AgentOps key features
Quick answer
The agentops Python SDK provides automatic tracking of AI agent interactions, session management for observability, and seamless integration with OpenAI clients. It enables developers to monitor and analyze LLM calls with minimal code changes.

Prerequisites

- Python 3.8+
- AgentOps API key
- OpenAI API key
- pip install agentops openai>=1.0
Setup
Install the agentops Python package and set your API keys as environment variables. This enables automatic instrumentation of OpenAI API calls for observability.
- Install packages: pip install agentops openai
- Set environment variables: AGENTOPS_API_KEY and OPENAI_API_KEY

pip install agentops openai

Output:
Collecting agentops
Collecting openai
Successfully installed agentops-1.0.0 openai-1.0.0
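Before initializing, it can help to confirm that both keys are actually visible to the process. Here is a minimal stdlib sketch (the variable names follow this guide; the helper function is illustrative, not part of the SDK):

```python
import os

def missing_env_vars(names=("AGENTOPS_API_KEY", "OPENAI_API_KEY")):
    """Return the names of any required environment variables that are unset."""
    return [n for n in names if not os.environ.get(n)]

missing = missing_env_vars()
if missing:
    print("Set these before calling agentops.init():", ", ".join(missing))
```

Running this before agentops.init() surfaces a missing key immediately instead of as a silent lack of dashboard data later.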
Step by step
Initialize agentops with your API key to enable automatic tracking. Use the OpenAI client as usual. Start and end sessions manually to group calls and add tags.
import os
from openai import OpenAI
import agentops
# Initialize AgentOps with API key from environment
agentops.init(api_key=os.environ["AGENTOPS_API_KEY"])
# Create OpenAI client
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
# Start a session with tags
session = agentops.start_session(tags=["example-agent"])
# Make a chat completion call
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello, AgentOps!"}]
)
print("Response:", response.choices[0].message.content)
# End the session
agentops.end_session("Success")

Output:
Response: Hello, AgentOps!
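If an OpenAI call raises, the manual end_session call above never runs and the session is left open. One way to guard against that is a small context manager that always records a final status; this is an illustrative pattern, not part of the agentops SDK, and the "Fail" status string is an assumption (the guide only shows "Success"):

```python
from contextlib import contextmanager

@contextmanager
def graceful_session(end_session):
    """Call end_session with a final status even when the body raises.

    `end_session` is whatever ends your session -- e.g. agentops.end_session
    in this guide. The "Fail" status string is an assumption here.
    """
    try:
        yield
        end_session("Success")
    except Exception:
        end_session("Fail")
        raise

# Demo with a stand-in recorder instead of the real agentops call
statuses = []
with graceful_session(statuses.append):
    pass
print(statuses)
```

In the example above, you would wrap the chat completion call in `with graceful_session(agentops.end_session):` instead of calling end_session directly.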
Common variations
You can use agentops with asynchronous OpenAI calls by initializing agentops the same way and using the AsyncOpenAI client. The SDK automatically instruments OpenAI calls for tracing. You can also rely on automatic session tracking without manual start/end calls.
For other LLM providers, agentops supports auto-instrumentation if compatible or manual tracking via custom wrappers.
import asyncio
import os
from openai import AsyncOpenAI
import agentops
agentops.init(api_key=os.environ["AGENTOPS_API_KEY"])
client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
async def main():
response = await client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Async call with AgentOps"}]
)
print("Async response:", response.choices[0].message.content)
asyncio.run(main())

Output:
Async response: Async call with AgentOps
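The manual-tracking idea for other providers can be sketched provider-agnostically: wrap each LLM call, record its name, latency, and outcome, and forward those records to your observability backend. Below, an in-memory list stands in for that backend; the agentops-specific recording calls are intentionally not shown, since they depend on the SDK version:

```python
import time
from functools import wraps

RECORDS = []  # stand-in for an observability backend

def tracked(fn):
    """Record name, latency, and success of each wrapped LLM call."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            result = fn(*args, **kwargs)
            RECORDS.append({"call": fn.__name__, "ok": True,
                            "seconds": time.monotonic() - start})
            return result
        except Exception:
            RECORDS.append({"call": fn.__name__, "ok": False,
                            "seconds": time.monotonic() - start})
            raise
    return wrapper

@tracked
def fake_completion(prompt):
    # Placeholder for a call to a provider without auto-instrumentation
    return f"echo: {prompt}"

print(fake_completion("hi"))
print(RECORDS[0]["call"], RECORDS[0]["ok"])
```

Applying `@tracked` to the function that calls the provider's client gives you per-call metadata even when the SDK cannot instrument that provider automatically.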
Troubleshooting
- If you see no tracking data in the AgentOps dashboard, verify your AGENTOPS_API_KEY is set correctly and agentops.init() is called before any OpenAI client usage.
- For missing session tags, ensure you call agentops.start_session() with tags before making API calls.
- If OpenAI calls fail, check your OPENAI_API_KEY and network connectivity.
Key Takeaways
- agentops auto-instruments OpenAI API calls for seamless AI agent observability.
- Use agentops.start_session() and agentops.end_session() to group and tag LLM interactions.
- The SDK supports both synchronous and asynchronous OpenAI client usage with minimal code changes.