How to use ConversationChain in LangChain
Quick answer
Use ConversationChain from langchain.chains to build conversational AI flows by combining an LLM with a memory component. Initialize it with a chat model such as ChatOpenAI and a memory such as ConversationBufferMemory to maintain context across turns.

Prerequisites

- Python 3.8+
- OpenAI API key (free tier works)
- `pip install langchain_openai>=0.2.0`
Setup
Install the required packages and set your OpenAI API key as an environment variable.
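For example, on macOS or Linux you can install the package and export the key in your shell (the key value below is a placeholder):

```shell
# Install the OpenAI integration package for LangChain
pip install "langchain_openai>=0.2.0"

# Make the API key available to the Python process
export OPENAI_API_KEY="sk-..."
```

On Windows, use `set OPENAI_API_KEY=...` (cmd) or `$env:OPENAI_API_KEY="..."` (PowerShell) instead.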
```bash
pip install langchain_openai
```

Step by step
This example shows how to create a ConversationChain with ChatOpenAI and ConversationBufferMemory to maintain chat history.
```python
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Initialize chat model
chat_model = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

# Initialize memory to store conversation history.
# The default memory_key ("history") matches ConversationChain's default prompt;
# a custom key like "chat_history" would fail prompt validation.
memory = ConversationBufferMemory()

# Create ConversationChain with model and memory
conversation = ConversationChain(llm=chat_model, memory=memory)

# Run conversation with user input.
# invoke() returns a dict; the model's reply is under the "response" key.
response1 = conversation.invoke({"input": "Hello, who won the World Series in 2023?"})
print("Bot:", response1["response"])

# Continue conversation; memory keeps context
response2 = conversation.invoke({"input": "Where was it played?"})
print("Bot:", response2["response"])
```

Output

```
Bot: The Texas Rangers won the 2023 World Series.
Bot: The 2023 World Series was played at Globe Life Field in Arlington, Texas.
```
Common variations
- Use different chat models like `gpt-4o` or `gpt-4o-mini` for cost/performance tradeoffs.
- Use `ConversationSummaryMemory` for long conversations to summarize context.
- Run asynchronously with `await conversation.ainvoke({"input": "..."})` if using async models.
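To see why summary memory helps for long conversations, here is a library-free sketch of the two strategies. The helper names are made up for illustration; LangChain's real `ConversationSummaryMemory` asks the LLM to produce the running summary rather than concatenating strings:

```python
def buffer_memory(history, user, bot):
    """Buffer strategy: keep every turn verbatim (grows without bound)."""
    return history + [("human", user), ("ai", bot)]

def summary_memory(summary, user, bot):
    """Summary strategy: fold each turn into one running summary string.
    A real implementation would have the LLM rewrite the summary;
    plain concatenation here is just a stand-in."""
    return f"{summary} Human asked: {user!r}. AI replied: {bot!r}.".strip()

turns = [
    ("Who won the World Series in 2023?", "The Texas Rangers."),
    ("Where was it played?", "Globe Life Field in Arlington, Texas."),
]

buf, summ = [], ""
for user, bot in turns:
    buf = buffer_memory(buf, user, bot)
    summ = summary_memory(summ, user, bot)

print(len(buf))  # 4 messages: 2 turns x 2 roles
print(summ)
```

The buffer keeps full fidelity but its token cost grows with every turn; the summary stays roughly constant in size at the cost of detail, which is the tradeoff behind choosing between the two memory classes.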
Troubleshooting
- If you get authentication errors, ensure `OPENAI_API_KEY` is set correctly in your environment.
- If context is not remembered, verify you passed `memory` to `ConversationChain`.
- For unexpected responses, adjust `temperature` or switch to a more capable model.
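A cheap way to catch the first problem early is to fail fast before building the chain. This small helper (`require_api_key` is our own name, not a LangChain function) checks the environment variable up front:

```python
import os

def require_api_key(var="OPENAI_API_KEY"):
    """Raise a clear error if the key is missing or blank, else return it."""
    key = os.environ.get(var, "").strip()
    if not key:
        raise RuntimeError(f"{var} is not set; export it before running the chain.")
    return key
```

Call `require_api_key()` once at startup, before constructing `ChatOpenAI`, so a missing key surfaces as a readable message instead of an authentication error mid-conversation.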
Key Takeaways
- Use `ConversationChain` with `ChatOpenAI` and memory to maintain chat context.
- Choose memory types like `ConversationBufferMemory` or `ConversationSummaryMemory` based on conversation length.
- Adjust model and temperature parameters for desired response style and cost.
- Always set your API key via environment variables for security.
- Async invocation is available for async-compatible chat models.
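The async invocation mentioned above follows the standard asyncio pattern. A minimal sketch, using a stand-in chain so it runs without an API key (`FakeChain` is hypothetical, not a LangChain class; a real chain's `ainvoke` is awaited the same way):

```python
import asyncio

class FakeChain:
    """Stand-in for ConversationChain, just to show the async call pattern."""
    async def ainvoke(self, inputs):
        await asyncio.sleep(0)  # pretend to await the model
        return {"response": f"echo: {inputs['input']}"}

async def main():
    chain = FakeChain()
    result = await chain.ainvoke({"input": "Hello"})
    return result["response"]

print(asyncio.run(main()))  # -> echo: Hello
```

With a real `ConversationChain`, replace `FakeChain()` with the chain built earlier and the `await chain.ainvoke(...)` line stays the same.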