How to store chatbot memory in database
Quick answer
Store chatbot memory by saving conversation messages or embeddings in a database like SQLite or PostgreSQL. Use Python to append new messages and retrieve past context to maintain conversation continuity with chat.completions.create calls.
Prerequisites
- Python 3.8+
- OpenAI API key (free tier works)
- pip install openai>=1.0
- pip install sqlalchemy
Setup
Install required packages and set your environment variable for the OpenAI API key.
- Install OpenAI SDK and SQLAlchemy for database ORM:
pip install openai sqlalchemy
Step by step
This example demonstrates storing chatbot messages in a SQLite database using SQLAlchemy. It saves user and assistant messages, retrieves conversation history, and sends it to the gpt-4o model to maintain memory.
import os
from openai import OpenAI
from sqlalchemy import create_engine, Column, Integer, String, Text
from sqlalchemy.orm import declarative_base, sessionmaker
# Initialize OpenAI client
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
# Setup SQLite database with SQLAlchemy
Base = declarative_base()
class Message(Base):
    __tablename__ = "messages"
    id = Column(Integer, primary_key=True)
    role = Column(String(10))  # 'user' or 'assistant'
    content = Column(Text)
engine = create_engine("sqlite:///chat_memory.db")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
# Function to save message
def save_message(role: str, content: str):
    session = Session()
    msg = Message(role=role, content=content)
    session.add(msg)
    session.commit()
    session.close()
# Function to load conversation history
def load_conversation():
    session = Session()
    messages = session.query(Message).order_by(Message.id).all()
    session.close()
    return [{"role": m.role, "content": m.content} for m in messages]
# Example usage
if __name__ == "__main__":
    user_input = "Hello, how are you?"
    save_message("user", user_input)
    conversation = load_conversation()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=conversation
    )
    assistant_reply = response.choices[0].message.content
    print("Assistant:", assistant_reply)
    save_message("assistant", assistant_reply)
output
Assistant: I'm doing well, thank you! How can I assist you today?
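The persistence layer itself is independent of the ORM. For reference, here is roughly the same save/load pattern using only the standard library's sqlite3 module; an in-memory database keeps the sketch self-contained, but swapping in a file path such as "chat_memory.db" gives persistence across runs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use "chat_memory.db" for a persistent file
conn.execute(
    "CREATE TABLE IF NOT EXISTS messages "
    "(id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
)

def save_message(role, content):
    # Parameterized query avoids SQL injection from user input
    conn.execute("INSERT INTO messages (role, content) VALUES (?, ?)", (role, content))
    conn.commit()

def load_conversation():
    rows = conn.execute("SELECT role, content FROM messages ORDER BY id").fetchall()
    return [{"role": r, "content": c} for r, c in rows]

save_message("user", "My name is Ada.")
save_message("assistant", "Nice to meet you, Ada!")
history = load_conversation()
print(len(history))           # 2
print(history[0]["content"])  # My name is Ada.
```

The returned list is already in the `{"role": ..., "content": ...}` shape that chat.completions.create expects for its messages parameter.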
Common variations
You can adapt this pattern for other databases like PostgreSQL by changing the connection string in create_engine. For async frameworks, use async database clients and async OpenAI calls. You can also store vector embeddings for semantic memory retrieval instead of raw messages.
import os
import asyncio
from openai import AsyncOpenAI

# Use the async client; the synchronous OpenAI client has no awaitable methods
client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

async def async_chat():
    messages = [{"role": "user", "content": "Hello"}]
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=messages
    )
    print(response.choices[0].message.content)

if __name__ == "__main__":
    asyncio.run(async_chat())
output
Hello! How can I help you today?
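The semantic-memory variation mentioned above stores an embedding vector alongside each message and retrieves only the most relevant past messages instead of the full history. A minimal sketch using cosine similarity over toy 3-dimensional vectors is below; in practice the vectors would come from an embeddings endpoint such as client.embeddings.create, and the stored contents and values here are made up for illustration.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors; 1.0 means identical direction
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy vectors standing in for real embeddings of stored messages
memory = [
    {"content": "User's favorite color is blue", "embedding": [0.9, 0.1, 0.0]},
    {"content": "User lives in Berlin",          "embedding": [0.0, 0.8, 0.2]},
]

def retrieve(query_embedding, memory, top_k=1):
    # Rank stored messages by similarity to the query and keep the top_k best
    ranked = sorted(memory,
                    key=lambda m: cosine_similarity(query_embedding, m["embedding"]),
                    reverse=True)
    return [m["content"] for m in ranked[:top_k]]

print(retrieve([0.85, 0.15, 0.0], memory))  # ["User's favorite color is blue"]
```

The retrieved snippets can then be prepended to the prompt as context, which keeps token usage roughly constant no matter how large the stored history grows.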
Troubleshooting
- If conversation context is too long, truncate older messages to fit token limits.
- Ensure database sessions are properly closed to avoid locks.
- Check that your OPENAI_API_KEY environment variable is set correctly.
- For large-scale apps, consider using vector databases for efficient memory retrieval.
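The truncation advice above can be sketched as a small helper that keeps the newest messages and drops the oldest once a budget is exceeded. This version uses a character budget as a rough stand-in for token counting; for exact limits you would count tokens with a tokenizer such as tiktoken instead.

```python
def truncate_history(messages, max_chars=4000):
    """Keep the most recent messages whose combined length fits the budget."""
    kept, total = [], 0
    for msg in reversed(messages):       # walk from newest to oldest
        total += len(msg["content"])
        if total > max_chars:
            break                        # everything older is dropped
        kept.append(msg)
    return list(reversed(kept))          # restore chronological order

history = [
    {"role": "user", "content": "a" * 3000},
    {"role": "assistant", "content": "b" * 3000},
    {"role": "user", "content": "c" * 1000},
]
trimmed = truncate_history(history, max_chars=4000)
print(len(trimmed))  # 2 -- the oldest message was dropped
```

Calling this on the result of load_conversation before each API request keeps the prompt within the model's context window. A common refinement is to always keep any system message at the start regardless of the budget.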
Key Takeaways
- Store chatbot memory as conversation messages in a database to maintain context across sessions.
- Use SQLAlchemy with SQLite or other databases for easy message persistence in Python.
- Retrieve and send full conversation history to the chat model to preserve memory.
- For async or large-scale use, adapt to async DB clients or vector embeddings for semantic memory.
- Always manage token limits by truncating old messages to avoid API errors.