How to implement agent state management
Agent state management with the OpenAI API comes down to one pattern: call client.chat.completions.create with the accumulated context passed in the messages array, so every request carries the full conversation and state is maintained across calls.

Prerequisites
- Python 3.8+
- OpenAI API key (free tier works)
- pip install "openai>=1.0"
Setup
Install the OpenAI Python SDK and set your API key as an environment variable to securely authenticate requests.
pip install "openai>=1.0"

Step by step
Maintain agent state by storing conversation history in a list and passing it with each API call to preserve context. This example demonstrates a simple chat agent that remembers previous messages.
import os
from openai import OpenAI
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
# Initialize conversation state
conversation_history = [
    {"role": "system", "content": "You are a helpful assistant."}
]

def chat_with_agent(user_input):
    # Append user message to history
    conversation_history.append({"role": "user", "content": user_input})

    # Call the chat completion API with the full conversation history
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=conversation_history
    )

    # Extract assistant reply
    assistant_message = response.choices[0].message.content

    # Append assistant reply to history
    conversation_history.append({"role": "assistant", "content": assistant_message})
    return assistant_message
# Example usage
print(chat_with_agent("Hello, who won the World Series in 2020?"))
print(chat_with_agent("Can you remind me what you just said?"))
# Output:
# The Los Angeles Dodgers won the World Series in 2020.
# I said that the Los Angeles Dodgers won the World Series in 2020.
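The module-level conversation_history list above works for a single conversation. A common refinement is to wrap the same pattern in a class so each instance carries its own state; the ChatAgent name and injected-client design below are illustrative, not part of the OpenAI SDK:

```python
class ChatAgent:
    """Per-instance conversation state. Pass any client exposing the
    chat.completions.create interface (e.g. openai.OpenAI())."""

    def __init__(self, client, model="gpt-4o",
                 system="You are a helpful assistant."):
        self.client = client
        self.model = model
        self.history = [{"role": "system", "content": system}]

    def chat(self, user_input):
        # Same pattern as above: append, call with full history, append reply
        self.history.append({"role": "user", "content": user_input})
        response = self.client.chat.completions.create(
            model=self.model, messages=self.history
        )
        reply = response.choices[0].message.content
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

Two ChatAgent instances keep independent histories, which a single module-level list cannot do.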
Common variations
You can implement agent state management using asynchronous calls, streaming responses, or by storing state externally in databases or vector stores for scalability. Different models like claude-3-5-sonnet-20241022 or gemini-1.5-pro can be used similarly by passing conversation history.
import os
import anthropic
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
conversation_history = []
# Anthropic takes the system prompt as a separate `system` parameter,
# not as a "system" role message inside the messages list.
system_prompt = "You are a helpful assistant."
user_input = "Hello, how are you?"
conversation_history.append({"role": "user", "content": user_input})
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=500,
    system=system_prompt,
    messages=conversation_history
)
assistant_reply = message.content[0].text
conversation_history.append({"role": "assistant", "content": assistant_reply})
print(assistant_reply)
# Output: I'm doing well, thank you! How can I assist you today?
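For state that must survive process restarts, the in-memory list can be persisted externally. A minimal sketch using a JSON file (the save_history/load_history helper names are hypothetical; a database or vector store would follow the same save/restore shape):

```python
import json
from pathlib import Path

def save_history(history, path):
    # Persist conversation state to disk so it survives restarts
    Path(path).write_text(json.dumps(history, indent=2))

def load_history(path, system_prompt="You are a helpful assistant."):
    # Restore saved state, falling back to a fresh conversation
    p = Path(path)
    if p.exists():
        return json.loads(p.read_text())
    return [{"role": "system", "content": system_prompt}]
```

Call load_history at startup, pass the result as the messages list, and save_history after each turn.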
Troubleshooting
If the agent forgets context, ensure the full conversation history is passed with each API call and that the token limit is not exceeded. For large histories, summarize or truncate older messages. Also, verify your API key is correctly set in os.environ.
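The truncation mentioned above can be sketched as a small helper. This version is message-count based rather than token based (a tokenizer such as tiktoken would be needed for exact token budgets), and the truncate_history name is illustrative:

```python
def truncate_history(history, max_messages=20):
    # Keep the system message(s) plus only the most recent turns,
    # bounding prompt size at the cost of older context.
    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]
    return system + rest[-max_messages:]
```

Apply it to the history just before each API call; preserving the system message keeps the agent's instructions intact while old user/assistant turns are dropped.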
Key Takeaways
- Maintain agent state by storing conversation history and passing it with each API call.
- Use environment variables for API keys and the latest SDK client patterns for security and compatibility.
- For long conversations, manage token limits by summarizing or truncating history.
- Agent state can be stored in-memory, databases, or vector stores depending on scale and persistence needs.
- Different LLM providers and models support similar state management via message history.