How-to · Beginner · 3 min read

How to add memory to LangChain chain

Quick answer
To add memory to a LangChain chain, instantiate a memory class such as ConversationBufferMemory and pass it to the chain constructor via the memory parameter. The chain then retains conversational context across calls, so each response can build on earlier turns.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install langchain langchain_openai langchain_community

Setup

Install the necessary LangChain packages and set your OpenAI API key as an environment variable.

  • Run pip install langchain langchain_openai langchain_community (the langchain package provides langchain.memory and langchain.chains)
  • Set your API key in your shell: export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows; takes effect in new shells)
```bash
pip install langchain langchain_openai langchain_community
```

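Before running any of the examples, you can confirm the key is actually visible to Python. The check_api_key helper below is a hypothetical convenience, not part of LangChain:

```python
import os

def check_api_key() -> bool:
    """Return True if OPENAI_API_KEY is visible to this process."""
    return bool(os.environ.get("OPENAI_API_KEY"))

if __name__ == "__main__":
    if check_api_key():
        print("API key found")
    else:
        print("OPENAI_API_KEY is not set; export it before running the examples")
```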
Step by step

This example shows how to add ConversationBufferMemory to a LangChain LLMChain to maintain conversational context.

```python
import os
from langchain_openai import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
from langchain.memory import ConversationBufferMemory

# Initialize the chat model (reads OPENAI_API_KEY from the environment)
chat = ChatOpenAI(model="gpt-4o", temperature=0, api_key=os.environ["OPENAI_API_KEY"])

# Define a prompt template; MessagesPlaceholder injects the stored history
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}"),
])

# Initialize memory to store the conversation as message objects
memory = ConversationBufferMemory(memory_key="history", return_messages=True)

# Create the chain with memory
chain = LLMChain(llm=chat, prompt=prompt, memory=memory)

# Run the chain multiple times to see memory in action
print(chain.run(input="Hello!"))
print(chain.run(input="Can you remind me what I said earlier?"))
```

Output:

```text
Hello! How can I assist you today?
You said hello earlier. How can I help you further?
```

Common variations

You can use different memory types, such as ConversationSummaryMemory, which condenses past turns into a running summary, or ConversationEntityMemory, which tracks facts about named entities. Memory also works with async chains and with other chat models, such as Anthropic models via langchain_anthropic.

```python
from langchain.memory import ConversationSummaryMemory

# Summary memory condenses past turns with the LLM instead of storing them
# verbatim; return_messages=True keeps it compatible with MessagesPlaceholder
summary_memory = ConversationSummaryMemory(
    llm=chat, memory_key="history", return_messages=True
)

chain_with_summary = LLMChain(llm=chat, prompt=prompt, memory=summary_memory)

print(chain_with_summary.run(input="Hello!"))
print(chain_with_summary.run(input="What did I say before?"))
```

Output:

```text
Hello! How can I assist you today?
Previously, you greeted me with 'Hello!'. How can I help you now?
```

Troubleshooting

  • If memory seems empty, ensure memory_key matches the placeholder in your prompt template.
  • Check that the OPENAI_API_KEY environment variable is set correctly.
  • For long conversations, use summary memory to avoid token limits.

Key Takeaways

  • Use ConversationBufferMemory to keep full chat history in LangChain chains.
  • Pass the memory instance to the chain constructor via the memory parameter.
  • Match memory_key in memory with your prompt template placeholders for context injection.
  • Switch to summary or entity memory for efficient long-term context management.
  • Always set your API key in os.environ and use current LangChain imports.
Verified 2026-04 · gpt-4o, ChatOpenAI