How to persist OpenAI thread across sessions
Quick answer
To persist an OpenAI chat thread across sessions, save the conversation's
`messages` array locally (e.g., in a file or database) after each API call. Reload this message history in subsequent sessions and pass it back to `client.chat.completions.create()` to maintain context.

Prerequisites

- Python 3.8+
- OpenAI API key (free tier works)
- `pip install openai>=1.0`
Setup
Install the official OpenAI Python SDK and set your API key as an environment variable.
- Install the SDK:

  ```shell
  pip install openai
  ```

- Set the environment variable in your shell:

  ```shell
  export OPENAI_API_KEY='your_api_key'
  ```

Step by step
This example demonstrates saving the chat messages list to a JSON file after each interaction and reloading it to continue the conversation seamlessly across sessions.
```python
import os
import json

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# File used to persist chat history between sessions
HISTORY_FILE = "chat_history.json"

# Load previous messages or start fresh
try:
    with open(HISTORY_FILE, "r", encoding="utf-8") as f:
        messages = json.load(f)
except FileNotFoundError:
    messages = []

# Append the new user message
user_input = "Hello, how are you?"
messages.append({"role": "user", "content": user_input})

# Call the chat completion endpoint with the full message history
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
)

# Extract the assistant's reply
assistant_reply = response.choices[0].message.content
print("Assistant:", assistant_reply)

# Append the assistant reply so the next session has full context
messages.append({"role": "assistant", "content": assistant_reply})

# Save the updated messages to disk
with open(HISTORY_FILE, "w", encoding="utf-8") as f:
    json.dump(messages, f, ensure_ascii=False, indent=2)
```

Output

```
Assistant: I'm doing well, thank you! How can I assist you today?
```
Common variations
- Async usage: Use an async OpenAI client and async file I/O libraries to persist messages.
- Different storage: Persist messages in a database like SQLite or Redis instead of a JSON file.
- Model choice: Use `gpt-4o-mini` or other OpenAI chat models by changing the `model` parameter.
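For the database variation, here is a minimal sketch using Python's built-in `sqlite3` module. The `chat_history.db` filename, the `messages` table schema, and the helper names are illustrative choices, not part of any library API:

```python
import sqlite3

# Illustrative database file and schema
DB_FILE = "chat_history.db"

conn = sqlite3.connect(DB_FILE)
conn.execute(
    "CREATE TABLE IF NOT EXISTS messages ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, role TEXT, content TEXT)"
)

def load_messages(conn):
    """Return the saved conversation as a list of message dicts, oldest first."""
    rows = conn.execute(
        "SELECT role, content FROM messages ORDER BY id"
    ).fetchall()
    return [{"role": role, "content": content} for role, content in rows]

def save_message(conn, role, content):
    """Append one message to the store."""
    conn.execute(
        "INSERT INTO messages (role, content) VALUES (?, ?)", (role, content)
    )
    conn.commit()

# Same flow as the JSON example: load history, append, persist
messages = load_messages(conn)
save_message(conn, "user", "Hello, how are you?")
```

Unlike the JSON file, SQLite appends each message individually instead of rewriting the whole history, which scales better for long conversations or multiple concurrent threads.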
Troubleshooting
- If the conversation context is lost, ensure you reload the full `messages` history before each API call.
- If the JSON file is corrupted, delete it to start a fresh conversation.
- Watch for token limits; truncate older messages if the conversation history grows too large.
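One simple way to keep the history bounded is to cap the number of retained messages. The sketch below assumes a message-count cap; the cap of 20 and the helper name `truncate_history` are illustrative (for precise budgeting you would count tokens instead, e.g. with a tokenizer library such as `tiktoken`):

```python
MAX_MESSAGES = 20  # illustrative cap; tune to your model's context window

def truncate_history(messages, max_messages=MAX_MESSAGES):
    """Keep any leading system message plus the most recent turns."""
    system = [m for m in messages if m["role"] == "system"][:1]
    rest = [m for m in messages if m["role"] != "system"]
    keep = max_messages - len(system)
    return system + (rest[-keep:] if keep > 0 else [])

# Example: a system prompt plus 30 user turns gets trimmed to 20 messages
history = [{"role": "system", "content": "You are helpful."}]
history += [{"role": "user", "content": f"msg {i}"} for i in range(30)]
trimmed = truncate_history(history)
print(len(trimmed))  # 20
```

Call this on the loaded history before each API request so the request never exceeds the model's context window, while the system prompt is always preserved.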
Key Takeaways
- Persist the entire `messages` list to maintain conversation context across sessions.
- Reload saved messages before each API call to continue the chat thread seamlessly.
- Use JSON files or databases to store chat history depending on your app's scale.
- Monitor token limits and prune old messages to avoid exceeding model context windows.