AgentOps LLM call monitoring
Quick answer
Call agentops.init() to automatically track all LLM calls in your Python application. For manual session control, start sessions with agentops.start_session() and end them with agentops.end_session() to monitor and log detailed LLM usage.

Prerequisites
- Python 3.8+
- AgentOps API key
- pip install agentops
- OpenAI API key (if using OpenAI LLMs)
Setup
Install the agentops Python package and set your API keys as environment variables. This enables automatic instrumentation of LLM calls.
- Install the AgentOps SDK:
  pip install agentops
- Set environment variables:
  export AGENTOPS_API_KEY=<your_agentops_api_key>
  export OPENAI_API_KEY=<your_openai_api_key>  (if using OpenAI)

pip install agentops output
Collecting agentops
  Downloading agentops-1.0.0-py3-none-any.whl (10 kB)
Installing collected packages: agentops
Successfully installed agentops-1.0.0
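Before initializing anything, it can help to confirm that the required environment variables are actually visible to your Python process. A minimal sketch; the helper name and the REQUIRED_VARS list are illustrative, not part of the AgentOps SDK:

```python
import os

# Names this guide expects to be set; adjust for your provider.
REQUIRED_VARS = ["AGENTOPS_API_KEY", "OPENAI_API_KEY"]

def missing_env_vars(required, env=None):
    """Return the names in `required` that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]

if __name__ == "__main__":
    missing = missing_env_vars(REQUIRED_VARS)
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("All required environment variables are set.")
```

Running this before agentops.init() turns a silent authentication failure into an actionable message.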
Step by step
This example shows how to initialize AgentOps to automatically monitor OpenAI LLM calls and how to manually start and end a session for detailed tracking.
import os
import agentops
from openai import OpenAI
# Initialize AgentOps with your API key from environment
agentops.init(api_key=os.environ["AGENTOPS_API_KEY"])
# Create OpenAI client
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
# Optional: start a manual session with tags
session = agentops.start_session(tags=["example-session"])
# Make an LLM call (automatically tracked)
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello, AgentOps!"}]
)
print("LLM response:", response.choices[0].message.content)
# End the manual session with an end state ("Success" is the conventional AgentOps value)
agentops.end_session("Success")

output
LLM response: Hello, AgentOps!
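The manual start/end pattern above is easy to get wrong when the work in between can raise: the session is never ended. One way to make the close unconditional is a small wrapper. A sketch assuming AgentOps-style end states ("Success"/"Fail"); start_fn and end_fn are injectable stand-ins for agentops.start_session and agentops.end_session so the pattern can be shown and tested without live API keys:

```python
# Always end the session, marking it "Fail" on exception.
# `start_fn`/`end_fn` stand in for agentops.start_session /
# agentops.end_session; the "Success"/"Fail" strings follow AgentOps
# end-state conventions (an assumption here; check your SDK version).
def run_in_session(work, start_fn, end_fn, tags=None):
    start_fn(tags=tags or [])
    try:
        result = work()
    except Exception:
        end_fn("Fail")
        raise
    end_fn("Success")
    return result
```

In a real application you would pass agentops.start_session and agentops.end_session directly, with `work` being the function that makes the LLM calls.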
Common variations
You can use AgentOps with other LLM providers by initializing their clients normally; AgentOps auto-instruments supported SDKs such as OpenAI's. For async usage, call agentops.init() before making any async calls and use the provider's async client (for OpenAI, AsyncOpenAI). AgentOps also supports automatic tracing without manual session control.
import asyncio
import os
import agentops
from openai import AsyncOpenAI

agentops.init(api_key=os.environ["AGENTOPS_API_KEY"])
# Awaited calls require the async client, not the sync OpenAI class
client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

async def async_llm_call():
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Async call with AgentOps"}]
    )
    print("Async LLM response:", response.choices[0].message.content)

asyncio.run(async_llm_call())

output
Async LLM response: Async call with AgentOps
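Once agentops.init() has run, multiple async calls can be fanned out concurrently with asyncio.gather, and each awaited completion is tracked as its own LLM event. A sketch using a stand-in coroutine (fake_llm_call) in place of client.chat.completions.create so it runs without API keys or network access:

```python
import asyncio

# Stand-in for an awaited client.chat.completions.create(...) call,
# so the fan-out pattern runs without credentials.
async def fake_llm_call(prompt: str) -> str:
    await asyncio.sleep(0)  # yield control, as a real network call would
    return f"echo: {prompt}"

async def fan_out(prompts):
    # gather awaits all calls concurrently; with a real AsyncOpenAI
    # client, AgentOps would record one event per completed call.
    return await asyncio.gather(*(fake_llm_call(p) for p in prompts))

if __name__ == "__main__":
    print(asyncio.run(fan_out(["first prompt", "second prompt"])))
```

To use this for real, replace fake_llm_call with a coroutine that awaits the AsyncOpenAI client shown above.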
Troubleshooting
- If LLM calls are not tracked, ensure agentops.init() is called before any LLM client usage.
- Verify your AGENTOPS_API_KEY environment variable is set correctly.
- For missing logs, check network connectivity to the AgentOps backend.
Key Takeaways
- Call agentops.init() early to auto-instrument LLM calls.
- Use agentops.start_session() and agentops.end_session() for manual session tracking.
- AgentOps supports async and sync LLM calls with automatic tracing.
- Set AGENTOPS_API_KEY in your environment to authenticate.
- Check connectivity and initialization order if monitoring fails.