LangChain chatbot with message history
Quick answer
Use LangChain's ChatOpenAI class with ConversationBufferMemory to build a chatbot that maintains message history by storing previous messages and passing them back to the model on each turn. This enables context-aware conversations with models like gpt-4o.
Prerequisites
- Python 3.8+
- OpenAI API key (free tier works)
- pip install langchain langchain_openai
Setup
Install the required packages and set your OpenAI API key as an environment variable.
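On macOS or Linux, the key can be exported in your shell before running the script (the value below is a placeholder, not a real key):

```shell
# Placeholder value -- substitute your actual OpenAI API key
export OPENAI_API_KEY="sk-your-key-here"
```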
- Install the LangChain and LangChain OpenAI packages:
pip install langchain langchain_openai
Step by step
This example shows how to create a LangChain chatbot with message history using ConversationBufferMemory to keep track of the conversation context.
import os
from langchain_openai import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
# Initialize the chat model
chat = ChatOpenAI(model="gpt-4o", temperature=0.7, api_key=os.environ["OPENAI_API_KEY"])
# Set up memory to store conversation history
# The default memory_key, "history", matches ConversationChain's built-in prompt
memory = ConversationBufferMemory(return_messages=True)
# Create the conversation chain with memory
conversation = ConversationChain(llm=chat, memory=memory)
# Simulate a chat session
print("User: Hello, who won the world series in 2023?")
response1 = conversation.invoke({"input": "Hello, who won the world series in 2023?"})
print("Bot:", response1["response"])
print("User: Where was it played?")
response2 = conversation.invoke({"input": "Where was it played?"})
print("Bot:", response2["response"])
Output
User: Hello, who won the world series in 2023?
Bot: The Texas Rangers won the 2023 World Series.
User: Where was it played?
Bot: The 2023 World Series was played at Globe Life Field in Arlington, Texas.
Common variations
You can customize your chatbot by:
- Using ChatAnthropic for Anthropic Claude models.
- Switching to async calls with conversation.ainvoke().
- Using different memory types like ConversationSummaryMemory for summarizing long chats.
- Changing the model to gpt-4o-mini for faster, cheaper responses.
from langchain_anthropic import ChatAnthropic
import asyncio
# Async example with Anthropic Claude
async def async_chat():
    chat = ChatAnthropic(model="claude-3-5-sonnet-20241022", api_key=os.environ["ANTHROPIC_API_KEY"])
    memory = ConversationBufferMemory(return_messages=True)
    conversation = ConversationChain(llm=chat, memory=memory)
    # ainvoke is the async counterpart of invoke
    response = await conversation.ainvoke({"input": "Hello!"})
    print(response["response"])

asyncio.run(async_chat())
Output
Hello! How can I assist you today?
Troubleshooting
- If you get an authentication error, verify your OPENAI_API_KEY environment variable is set correctly.
- If the chatbot forgets context, ensure ConversationBufferMemory is properly passed to the ConversationChain.
- For rate limits, reduce max_tokens or switch to a smaller model like gpt-4o-mini.
Key Takeaways
- Use ConversationBufferMemory in LangChain to maintain chat history for context.
- Pass the memory object to ConversationChain to enable stateful conversations.
- You can switch models or SDKs easily while keeping the same memory pattern.
- Async invocation is supported for scalable chatbot applications.
- Always set your API key securely via environment variables.