How-to · Beginner · 3 min read

How to use AgentOps with CrewAI

Quick answer
Call agentops.init() to enable automatic observability for AI agents, including those built with CrewAI. Initialize AgentOps with your API key, then run your CrewAI agents as usual; all LLM calls are tracked automatically.

PREREQUISITES

  • Python 3.8+
  • AgentOps API key
  • CrewAI installed
  • pip install agentops crewai

Setup

Install the required packages and set the AGENTOPS_API_KEY environment variable. This lets AgentOps automatically track your AI agent's LLM calls.

bash
pip install agentops crewai
output
Collecting agentops
Collecting crewai
Successfully installed agentops-1.0.0 crewai-1.0.0
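The setup above assumes the API key is available as an environment variable. One way to provide it for the current shell session (the key value below is a placeholder):

```shell
# Make the AgentOps API key available to your script (placeholder value)
export AGENTOPS_API_KEY="your-agentops-api-key"

# Confirm the variable is visible to child processes
printenv AGENTOPS_API_KEY
```

For persistent use, add the export line to your shell profile or a `.env` file instead of typing it each session.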

Step by step

Initialize AgentOps in your Python script before running your CrewAI agents. In CrewAI, an Agent is paired with a Task and executed through a Crew; once agentops.init() has been called, all LLM calls made during the crew's run are tracked without additional instrumentation.

python
import os
import agentops
from crewai import Agent, Task, Crew

# Initialize AgentOps with your API key from the environment
agentops.init(api_key=os.environ["AGENTOPS_API_KEY"])

# Define a CrewAI agent, a task for it, and a crew to run the task
agent = Agent(
    role="Research Assistant",
    goal="Answer technical questions clearly",
    backstory="You are a helpful assistant.",
    llm="gpt-4o-mini",
)
task = Task(
    description="What is RAG?",
    expected_output="A short explanation of Retrieval-Augmented Generation",
    agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])

# Run the crew synchronously; AgentOps records the LLM calls automatically
result = crew.kickoff()
print("Agent response:", result)
output
Agent response: Retrieval-Augmented Generation (RAG) is a technique that combines retrieval of documents with generative models to improve accuracy and context.

Common variations

You can use AgentOps with asynchronous CrewAI runs by calling await crew.kickoff_async(). For streaming or advanced telemetry, AgentOps captures relevant events automatically without extra code.

AgentOps supports multiple LLM providers; just initialize AgentOps once and pass any CrewAI-compatible model string via the agent's llm parameter.

python
import asyncio
import os
import agentops
from crewai import Agent, Task, Crew

agentops.init(api_key=os.environ["AGENTOPS_API_KEY"])

async def main():
    agent = Agent(
        role="Research Assistant",
        goal="Answer technical questions clearly",
        backstory="You are a helpful assistant.",
        llm="gpt-4o-mini",
    )
    task = Task(
        description="Explain RAG in simple terms.",
        expected_output="A plain-language explanation of Retrieval-Augmented Generation",
        agent=agent,
    )
    crew = Crew(agents=[agent], tasks=[task])
    # Run the crew asynchronously; AgentOps still tracks all LLM calls
    result = await crew.kickoff_async()
    print("Async agent response:", result)

asyncio.run(main())
output
Async agent response: Retrieval-Augmented Generation (RAG) combines document retrieval with language models to provide accurate and context-aware answers.

Troubleshooting

  • If you do not see telemetry data in your AgentOps dashboard, verify that AGENTOPS_API_KEY is set correctly in your environment.
  • Ensure agentops.init() is called before any CrewAI agent runs.
  • For network issues, check your firewall or proxy settings that might block telemetry uploads.
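As a quick first check for the missing-key case, here is a minimal sketch (the helper name check_agentops_env is illustrative, not part of either library) that verifies the key is visible to the process before agentops.init() is called:

```python
import os

def check_agentops_env() -> bool:
    """Return True when the AgentOps API key is available to this process."""
    return bool(os.environ.get("AGENTOPS_API_KEY"))

if __name__ == "__main__":
    if check_agentops_env():
        print("AGENTOPS_API_KEY is set; telemetry uploads can authenticate.")
    else:
        print("AGENTOPS_API_KEY is missing; set it before calling agentops.init().")
```

Running this before your crew's kickoff rules out the most common cause of an empty dashboard.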

Key Takeaways

  • Initialize agentops early to enable automatic AI agent observability.
  • AgentOps tracks all CrewAI LLM calls without manual instrumentation.
  • Supports synchronous and asynchronous CrewAI agent usage seamlessly.
Verified 2026-04 · openai:gpt-4o-mini