How-to · Beginner · 4 min read

How to use few-shot prompt template in LangChain

Quick answer
Use the FewShotPromptTemplate class from langchain_core.prompts to build few-shot prompts from a list of example dictionaries, an example-formatting template, and a suffix template for the user's input. This enables structured few-shot learning with any LLM in LangChain.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install "langchain-openai>=0.2.0" (quote the version specifier so your shell does not interpret >=)

Setup

Install the required LangChain package and set your OpenAI API key in the environment variables.

  • Run pip install langchain_openai to install LangChain's OpenAI integration.
  • Set your API key in your shell: export OPENAI_API_KEY='your_api_key_here' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key_here" (Windows).
bash
pip install langchain_openai
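Before running any of the examples, it is worth failing fast if the key is missing. A small stdlib-only helper (not part of LangChain; the function name is our own) makes the error obvious:

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, or raise a clear error."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before running the examples.")
    return key
```

Call require_api_key() once at startup so a missing key surfaces as a readable message rather than an authentication error deep inside a model call.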

Step by step

This example demonstrates creating a few-shot prompt template with two examples and a suffix prompt, then calling OpenAI's gpt-4o model via LangChain to generate a completion.

python
import os
from langchain_openai import ChatOpenAI
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

# Define the examples as plain dictionaries
# (langchain_core.prompts has no Example class)
examples = [
    {"input": "Translate 'Hello' to French.", "output": "Bonjour"},
    {"input": "Translate 'Goodbye' to French.", "output": "Au revoir"},
]

# Template used to render each individual example;
# input_variables must list every placeholder it uses
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

# Create the FewShotPromptTemplate; the suffix is a plain template
# string where the user's input will be inserted
few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="Translate the following sentence to French: {input}",
    input_variables=["input"],
    example_separator="\n\n",
)

# Format the prompt with a new input
formatted_prompt = few_shot_prompt.format(input="I love programming.")
print("Formatted prompt:\n", formatted_prompt)

# Initialize the LangChain OpenAI chat model client
client = ChatOpenAI(model="gpt-4o", temperature=0)

# Call the model with the formatted prompt
response = client.invoke(formatted_prompt)
print("\nModel response:\n", response.content)
output
Formatted prompt:
Input: Translate 'Hello' to French.
Output: Bonjour

Input: Translate 'Goodbye' to French.
Output: Au revoir

Translate the following sentence to French: I love programming.

Model response:
J'aime programmer.

Common variations

  • Use different LLMs by changing model in ChatOpenAI, e.g., gpt-4o-mini.
  • Use async calls with await client.ainvoke(...) in an async function.
  • Customize example formatting by modifying example_prompt templates.
  • Combine few-shot templates with LangChain chains for complex workflows.
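To see what the customization knobs control, it helps to know that FewShotPromptTemplate essentially renders each example with the example template, then joins the pieces and the suffix with the separator. A plain-Python sketch of that assembly (our own illustration, not the library's actual implementation):

```python
def format_few_shot(examples, example_template, suffix, separator="\n\n", **kwargs):
    """Assemble a few-shot prompt: each rendered example, then the suffix."""
    parts = [example_template.format(**ex) for ex in examples]
    parts.append(suffix.format(**kwargs))
    return separator.join(parts)

examples = [
    {"input": "Translate 'Hello' to French.", "output": "Bonjour"},
    {"input": "Translate 'Goodbye' to French.", "output": "Au revoir"},
]
prompt = format_few_shot(
    examples,
    example_template="Input: {input}\nOutput: {output}",
    suffix="Translate the following sentence to French: {input}",
    input="I love programming.",
)
print(prompt)
```

Changing example_template or separator here corresponds directly to changing example_prompt or example_separator on the real class.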

Troubleshooting

  • If you get an authentication error, verify your OPENAI_API_KEY environment variable is set correctly.
  • If the prompt formatting is incorrect, check that input_variables match your template placeholders.
  • For rate limits, reduce request frequency or check your OpenAI usage dashboard.
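The second bullet is easy to reproduce with plain Python: str.format raises a KeyError when a placeholder has no matching variable, which is the same class of mismatch that breaks prompt formatting. A minimal, LangChain-free illustration:

```python
template = "Input: {input}\nOutput: {output}"

# Supplying only 'input' leaves '{output}' unresolved and raises KeyError,
# mirroring an input_variables/placeholder mismatch in a PromptTemplate.
try:
    template.format(input="Translate 'Hello' to French.")
except KeyError as missing:
    print(f"Missing template variable: {missing}")
```

If formatting fails, compare every {placeholder} in your templates against the input_variables you declared and the keys in your example dictionaries.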

Key Takeaways

  • Use FewShotPromptTemplate to structure few-shot examples and suffix prompts in LangChain.
  • Always define input_variables explicitly to match your prompt placeholders.
  • Format the prompt before sending it to the LLM client for best control over input.
  • You can easily swap models or use async calls with LangChain's ChatOpenAI client.
  • Check environment variables and template syntax carefully to avoid common errors.
Verified 2026-04 · gpt-4o