How to install CrewAI in Python
Direct answer
Install CrewAI in Python with <code>pip install crewai</code>, then import it with <code>import crewai</code> to start using its API.
Setup
Install
pip install crewai
Env vars
CREWAI_API_KEY
Imports
import os
import crewai
Examples
In: Initialize the CrewAI client and get a simple completion for 'Hello, world!'
Out: Response: Hello, world! How can I assist you today?
In: Use CrewAI to generate a short summary of a text about AI advancements.
Out: Response: AI advancements have accelerated rapidly, enabling smarter applications across industries.
In: Call CrewAI with an empty prompt to test error handling.
Out: Error: Prompt cannot be empty. Please provide valid input.
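The empty-prompt case above can also be caught client-side before spending a request (a minimal sketch; <code>validate_prompt</code> is a hypothetical helper, not part of the CrewAI package):

```python
def validate_prompt(prompt: str) -> str:
    """Reject empty or whitespace-only prompts before making an API call."""
    if not prompt or not prompt.strip():
        raise ValueError("Prompt cannot be empty. Please provide valid input.")
    return prompt.strip()
```

Calling this before the API request keeps the error message consistent with the one the service returns, without costing a round trip.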
Integration steps
- Install the CrewAI Python package using pip.
- Set your API key in the environment variable CREWAI_API_KEY.
- Import the crewai module in your Python script.
- Initialize the CrewAI client with the API key from os.environ.
- Call the appropriate CrewAI method with your input prompt.
- Extract and use the response text from the API call.
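Steps 2 and 4 can be wrapped in a small fail-fast helper so a missing credential surfaces immediately instead of as an authentication error later (a minimal sketch; <code>load_api_key</code> is a hypothetical name, not part of the CrewAI package):

```python
import os

def load_api_key(env_var: str = "CREWAI_API_KEY") -> str:
    """Read the API key from the environment, failing fast if it is missing."""
    key = os.environ.get(env_var, "").strip()
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; export it before initializing the client."
        )
    return key

# The client would then be initialized with the returned key:
# client = crewai.Client(api_key=load_api_key())
```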
Full code
import os
import crewai
# Initialize CrewAI client with API key from environment
client = crewai.Client(api_key=os.environ['CREWAI_API_KEY'])
# Define a prompt
prompt = "Hello, CrewAI!"
# Call the completion method
response = client.messages.create(prompt=prompt, max_tokens=50)
# Print the response text
print("Response:", response.text)
Output
Response: Hello, CrewAI! How can I assist you today?
API trace
Request
{"prompt": "Hello, CrewAI!", "max_tokens": 50}
Response
{"text": "Hello, CrewAI! How can I assist you today?", "usage": {"tokens": 15}}
Extract
response.text
Variants
Streaming response version
Use streaming to display partial results immediately for long responses or better user experience.
import os
import crewai
client = crewai.Client(api_key=os.environ['CREWAI_API_KEY'])
prompt = "Tell me a story about AI."
for chunk in client.completions.stream(prompt=prompt, max_tokens=100):
    print(chunk.text, end='', flush=True)
print()
Async version
Use async calls to handle multiple concurrent requests efficiently in an async Python environment.
import os
import asyncio
import crewai
async def main():
    client = crewai.Client(api_key=os.environ['CREWAI_API_KEY'])
    prompt = "Explain quantum computing in simple terms."
    response = await client.completions.acreate(prompt=prompt, max_tokens=60)
    print("Response:", response.text)

asyncio.run(main())
Alternative model usage
Use a different model variant for improved accuracy or specialized tasks.
import os
import crewai
client = crewai.Client(api_key=os.environ['CREWAI_API_KEY'])
prompt = "Summarize the latest AI trends."
response = client.messages.create(prompt=prompt, max_tokens=50, model="crewai-advanced-v2")
print("Response:", response.text)
Performance
Latency: ~500ms for typical completion calls
Cost: ~$0.0015 per 100 tokens generated
Rate limits: Default tier: 300 requests per minute, 50,000 tokens per minute
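Staying under the 300 requests/minute default tier can be handled with simple client-side pacing (a generic Python sketch, not part of the CrewAI package; <code>RatePacer</code> is a hypothetical name):

```python
import time

class RatePacer:
    """Spaces out calls so they never exceed a requests-per-minute budget."""

    def __init__(self, requests_per_minute: int = 300):
        self.min_interval = 60.0 / requests_per_minute  # seconds between calls
        self.last_call = 0.0

    def wait(self) -> None:
        """Sleep just long enough to respect the configured rate."""
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

pacer = RatePacer(requests_per_minute=300)
# Call pacer.wait() before each request to the API.
```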
- Use concise prompts to reduce token usage.
- Limit <code>max_tokens</code> to the minimum needed for your task.
- Reuse context when possible to avoid repeated tokens.
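At the quoted rate of ~$0.0015 per 100 generated tokens, the payoff from trimming <code>max_tokens</code> is easy to quantify (a quick sketch using the pricing figure above):

```python
PRICE_PER_100_TOKENS = 0.0015  # rate quoted above, in USD

def estimated_cost(tokens: int) -> float:
    """Estimate generation cost in USD for a given number of generated tokens."""
    return tokens / 100 * PRICE_PER_100_TOKENS

# Trimming a completion from 500 generated tokens down to 100 saves:
saving = estimated_cost(500) - estimated_cost(100)
print(f"${saving:.4f}")  # → $0.0060
```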
| Approach | Latency | Cost/call | Best for |
|---|---|---|---|
| Standard completion | ~500ms | ~$0.0015 | General purpose text generation |
| Streaming completion | ~300ms initial + streaming | ~$0.0015 | Long outputs with better UX |
| Async completion | ~500ms | ~$0.0015 | Concurrent requests in async apps |
Quick tip
Always set your CrewAI API key in the environment variable <code>CREWAI_API_KEY</code> to keep credentials secure.
Common mistake
Beginners often forget to set the <code>CREWAI_API_KEY</code> environment variable, causing authentication errors.