How-to · Beginner · 3 min read

How to format prompt with variables in LangChain

Quick answer
Use ChatPromptTemplate from langchain_core.prompts to define prompts with variables by writing a template string with {placeholder} fields; from_template infers the variable names automatically. Then call format_messages with the variable values to generate the messages to send to the model.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install langchain-openai langchain-core

Setup

Install the required LangChain packages and set your OpenAI API key as an environment variable.

  • Install LangChain OpenAI and core prompt packages:
bash
pip install langchain-openai langchain-core

Step by step

Define a prompt template with variables using ChatPromptTemplate, then fill it with actual values and send the resulting messages to the model.

python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
import os

# Define a prompt template with variables
prompt_template = ChatPromptTemplate.from_template(
    "Translate the following text to {language}: {text}"
)

# Fill in the variables to produce a list of chat messages
messages = prompt_template.format_messages(language="French", text="Hello, how are you?")

# Initialize the OpenAI chat client
client = ChatOpenAI(model="gpt-4o", api_key=os.environ["OPENAI_API_KEY"])

# Send the formatted messages to the model
response = client.invoke(messages)

print("Prompt sent to model:", messages[0].content)
print("Model response:", response.content)
output
Prompt sent to model: Translate the following text to French: Hello, how are you?
Model response: Bonjour, comment ça va ?

Common variations

You can use multiple variables in the template, async calls, or different models. For example, pass model="gpt-4o-mini" to ChatOpenAI, or format prompts dynamically in loops.

python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
import os

# Template with multiple variables
prompt_template = ChatPromptTemplate.from_template(
    "Summarize the article titled '{title}' in {language}."
)

# Format with variables
messages = prompt_template.format_messages(title="AI advancements", language="Spanish")

# Async example (requires an async environment)
# async def main():
#     client = ChatOpenAI(model="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"])
#     response = await client.ainvoke(messages)
#     print(response.content)

# Use a different model
client = ChatOpenAI(model="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"])
response = client.invoke(messages)
print(response.content)
output
El artículo titulado 'AI advancements' trata sobre los últimos avances en inteligencia artificial.

Troubleshooting

  • If you see KeyError when formatting, ensure every variable in the template is provided to format_messages().
  • If the prompt is not sent correctly, verify your API key is set in os.environ["OPENAI_API_KEY"].
  • For model errors, confirm you are using a current model name like gpt-4o or gpt-4o-mini.

Key Takeaways

  • Use ChatPromptTemplate.from_template to create prompts with variables for dynamic content.
  • Always provide all required variables when calling format_messages() to avoid KeyError.
  • Integrate formatted prompts with ChatOpenAI for seamless AI interactions.
  • You can switch models or use async calls with the same prompt formatting approach.
  • Set your API key securely in environment variables to avoid authentication issues.
Verified 2026-04 · gpt-4o, gpt-4o-mini