How to save OpenAI conversation history in Python
Quick answer
To save OpenAI conversation history in Python, maintain a list of message dictionaries representing the chat turns, appending each user and assistant message. Send this list as the messages parameter via the openai SDK, and serialize the list to a file (e.g., JSON) to persist the conversation.
Prerequisites
- Python 3.8+
- An OpenAI API key
- pip install openai>=1.0
Setup
Install the official OpenAI Python SDK and set your API key as an environment variable.
- Run pip install openai to install the SDK.
- Set your API key in your environment: export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows).
Step by step
This example shows how to maintain and save conversation history in a JSON file while interacting with the OpenAI gpt-4o chat model.
```python
import os
import json
from openai import OpenAI

# Initialize OpenAI client
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Load conversation history from file or start new
history_file = "conversation_history.json"
try:
    with open(history_file, "r") as f:
        conversation = json.load(f)
except FileNotFoundError:
    conversation = []

# Add user message
user_message = "Hello, how are you?"
conversation.append({"role": "user", "content": user_message})

# Call OpenAI chat completion
response = client.chat.completions.create(
    model="gpt-4o",
    messages=conversation
)

# Extract assistant reply
assistant_message = response.choices[0].message.content
print("Assistant:", assistant_message)

# Append assistant reply to conversation
conversation.append({"role": "assistant", "content": assistant_message})

# Save updated conversation history
with open(history_file, "w") as f:
    json.dump(conversation, f, indent=2)
```
Output
```
Assistant: I'm doing well, thank you! How can I assist you today?
```
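The steps above can be wrapped into a single reusable helper so each turn loads, updates, and persists the history in one call. The name `chat_turn` is chosen here for illustration; the client is passed in as a parameter, which also lets you exercise the save/load logic offline with a stand-in object that mimics the SDK's response shape.

```python
import json
from types import SimpleNamespace

def chat_turn(client, history_file, user_message, model="gpt-4o"):
    """Load history, append the user message, call the model, persist, and return the reply."""
    try:
        with open(history_file) as f:
            conversation = json.load(f)
    except FileNotFoundError:
        conversation = []
    conversation.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model=model, messages=conversation)
    assistant_message = response.choices[0].message.content
    conversation.append({"role": "assistant", "content": assistant_message})
    with open(history_file, "w") as f:
        json.dump(conversation, f, indent=2)
    return assistant_message

# Real usage:
# client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
# print(chat_turn(client, "conversation_history.json", "Hello!"))

# Offline check with a stand-in client mimicking the SDK's response shape
fake_client = SimpleNamespace(chat=SimpleNamespace(completions=SimpleNamespace(
    create=lambda model, messages: SimpleNamespace(
        choices=[SimpleNamespace(message=SimpleNamespace(content="Hi there!"))]))))
print(chat_turn(fake_client, "demo_history.json", "Hello!"))  # Hi there!
```

Because the file is reloaded at the start of every call, the helper also works across separate runs of the script.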
Common variations
You can adapt this pattern for asynchronous calls, streaming responses, or different models like gpt-4o-mini. For async, use asyncio and await with the OpenAI client. For streaming, handle partial responses and append them before saving.
```python
import os
import asyncio
from openai import AsyncOpenAI

async def chat_async():
    # The async client exposes the same API as OpenAI, but calls are awaitable
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    conversation = [{"role": "user", "content": "Hello asynchronously!"}]
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=conversation
    )
    assistant_message = response.choices[0].message.content
    print("Assistant async:", assistant_message)

asyncio.run(chat_async())
```
Output
```
Assistant async: Hello! How can I help you today?
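For the streaming case, passing stream=True makes the SDK yield chunks whose content arrives in chunk.choices[0].delta.content; you accumulate these deltas into the full reply before appending it to the saved history. A minimal sketch, with the accumulation factored into a helper (`save_streamed_reply` is a name chosen here) so it can be exercised offline with stand-in chunks:

```python
import json
from types import SimpleNamespace

def save_streamed_reply(conversation, stream, history_file):
    """Accumulate streamed content deltas, then append the full reply and persist."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks (e.g., the final one) carry no content
            parts.append(delta)
    assistant_message = "".join(parts)
    conversation.append({"role": "assistant", "content": assistant_message})
    with open(history_file, "w") as f:
        json.dump(conversation, f, indent=2)
    return assistant_message

# Real usage with the SDK:
# stream = client.chat.completions.create(model="gpt-4o", messages=conversation, stream=True)
# save_streamed_reply(conversation, stream, "conversation_history.json")

# Stand-in chunks mimicking the SDK's chunk shape, for an offline demonstration
fake_chunks = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])
    for text in ["Hel", "lo!", None]
]
conversation = [{"role": "user", "content": "Hi"}]
print(save_streamed_reply(conversation, fake_chunks, "conversation_history.json"))  # Hello!
```

In a real application you would typically also print each delta as it arrives; the key point is that only the joined message is appended to the history before saving.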
Troubleshooting
- If you get a FileNotFoundError, ensure the conversation history file path is correct or initialize an empty list.
- If the API returns an error, verify your API key is set correctly in os.environ.
- For large conversations, consider truncating or summarizing history to stay within token limits.
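One simple truncation approach is to keep only the most recent N messages while preserving any system prompt (message count is a rough proxy for size; for exact limits you would count tokens with a tokenizer such as tiktoken). The helper name `truncate_history` and the cutoff of 20 are illustrative choices:

```python
def truncate_history(conversation, max_messages=20):
    """Keep the system prompt (if any) plus only the most recent messages."""
    system = [m for m in conversation if m["role"] == "system"]
    rest = [m for m in conversation if m["role"] != "system"]
    return system + rest[-max_messages:]

history = [{"role": "system", "content": "You are helpful."}]
history += [{"role": "user", "content": f"msg {i}"} for i in range(30)]
trimmed = truncate_history(history, max_messages=5)
print(len(trimmed))  # 6: the system prompt plus the last 5 messages
```

Run this before each API call (or before saving) so the file and the request both stay bounded.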
Key Takeaways
- Maintain conversation history as a list of message dicts with roles and content.
- Serialize conversation history to JSON files for persistent storage.
- Always append both user and assistant messages to keep context.
- Use environment variables for API keys to keep credentials secure.
- Adapt saving logic for async or streaming use cases as needed.