How to create dynamic prompts in LangChain
Quick answer

Use `ChatPromptTemplate` from `langchain_core.prompts` to define prompt templates with variables, then call `format` to dynamically inject values. Combine with `ChatOpenAI` to send these dynamic prompts to models like `gpt-4o`.

Prerequisites

- Python 3.8+
- OpenAI API key (free tier works)
- `pip install "langchain_openai>=0.2" "openai>=1.0"`
Setup
Install the required packages and set your OpenAI API key in the environment.
- Install LangChain OpenAI integration and OpenAI SDK:
pip install langchain_openai openai Step by step
Define a prompt template with variables using ChatPromptTemplate, then format it dynamically and send it to the gpt-4o model using ChatOpenAI.
```python
import os
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Define a prompt template with placeholders
prompt_template = ChatPromptTemplate.from_template(
    "Write a short story about a {adjective} {animal} who loves {activity}."
)

# Format the prompt dynamically
formatted_prompt = prompt_template.format(adjective="brave", animal="fox", activity="dancing")

# Initialize the OpenAI chat client
chat = ChatOpenAI(model="gpt-4o", api_key=os.environ["OPENAI_API_KEY"])

# Send the dynamic prompt to the model
response = chat.invoke(formatted_prompt)
print("Prompt sent:", formatted_prompt)
print("Model response:", response.content)
```

Output:

```
Prompt sent: Human: Write a short story about a brave fox who loves dancing.
Model response: Once upon a time, a brave fox named Felix loved to dance under the moonlight...
```
Common variations
You can create dynamic prompts with multiple variables, use async calls, or switch models easily.
- Use `format_prompt` for more complex prompt objects.
- Switch to `gpt-4o-mini` for faster, cheaper calls.
- Use async methods with `await chat.ainvoke(...)` in async functions.
```python
import asyncio
import os
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

async def async_example():
    prompt_template = ChatPromptTemplate.from_template(
        "Describe a {color} sunset over the {place}."
    )
    prompt = prompt_template.format(color="golden", place="mountains")
    chat = ChatOpenAI(model="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"])
    # ainvoke is the async counterpart of invoke
    response = await chat.ainvoke(prompt)
    print("Async response:", response.content)

asyncio.run(async_example())
```

Output:

```
Async response: The golden sunset painted the mountains with hues of orange and pink, creating a breathtaking view.
```
Troubleshooting
If you get a `KeyError` when formatting prompts, ensure every variable in the template is supplied in the `format` call. If API calls fail, verify that `OPENAI_API_KEY` is set correctly in the environment.
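Because `ChatPromptTemplate` uses Python's f-string placeholder syntax by default, you can pre-check for missing variables with the standard library alone before formatting. A minimal sketch (the template string and variable values here are illustrative):

```python
from string import Formatter

template = "Write a short story about a {adjective} {animal} who loves {activity}."

# Collect the placeholder names the template requires
required = {name for _, name, _, _ in Formatter().parse(template) if name}
provided = {"adjective": "brave", "animal": "fox"}  # "activity" is missing

missing = required - provided.keys()
if missing:
    print("Missing variables:", sorted(missing))  # avoids the KeyError
else:
    print(template.format(**provided))
```

LangChain templates also expose an `input_variables` attribute you can check the same way.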
Key Takeaways
- Use `ChatPromptTemplate` to create reusable prompt templates with variables.
- Call `format` on the template to inject dynamic values before sending to the model.
- Combine with `ChatOpenAI` to send dynamic prompts to `gpt-4o` or other models.
- Async calls and different models like `gpt-4o-mini` are supported for flexibility.
- Always provide all required variables to avoid formatting errors, and set your API key in `os.environ`.