How to use LiteLLM with CrewAI
Quick answer
Use the litellm Python SDK, or CrewAI's built-in LLM class (which routes calls through LiteLLM), to reach any LiteLLM-supported model provider, then integrate with CrewAI by assigning that model to your agents and tasks. This combines a single, provider-agnostic model interface with CrewAI's orchestration and workflow capabilities.

Prerequisites
- Python 3.10+ (required by recent CrewAI releases)
- pip install litellm crewai
- Basic familiarity with Python (async knowledge helps for non-blocking runs)
Setup
Install the required packages litellm and crewai via pip. Ensure Python 3.10 or higher is installed.
pip install litellm crewai

Step by step
This example configures a model through CrewAI's LLM class, which routes requests via LiteLLM, then runs a minimal one-agent crew that generates a text completion.
from crewai import Agent, Task, Crew, LLM

# Configure a model via CrewAI's LLM class; requests are routed through LiteLLM.
# Any LiteLLM-supported model string works (e.g. "gpt-4o-mini", "ollama/llama3").
llm = LLM(model="gpt-4o-mini")

# Define an agent backed by the LiteLLM-routed model
writer = Agent(
    role="Writer",
    goal="Generate short text completions",
    backstory="A concise writing assistant.",
    llm=llm,
)

# Define the task the agent will perform
task = Task(
    description="Continue the greeting: 'Hello from LiteLLM with CrewAI!'",
    expected_output="A one-sentence continuation.",
    agent=writer,
)

# Assemble the crew and run it
crew = Crew(agents=[writer], tasks=[task])
result = crew.kickoff()
print("Generated text:", result)

Output
Generated text: Hello from LiteLLM with CrewAI! This is a sample continuation generated by the model.
Common variations
- Use crew.kickoff_async() instead of crew.kickoff() for non-blocking execution inside async applications.
- Swap gpt-4o-mini for any other LiteLLM-supported model string, including local models served through Ollama.
- Give different agents different LLM configurations to build modular, multi-model workflows.
Troubleshooting
- If you see model errors, verify the model string is valid for your provider and, for local models, that the model files are downloaded.
- If authentication fails, check that the provider's API key (for example OPENAI_API_KEY) is set in your environment.
- For async runtime errors, ensure you await crew.kickoff_async() inside a running event loop (for example via asyncio.run).
- If a crew does not run, check that each Task is assigned an agent and that both agents and tasks are passed to the Crew constructor before calling kickoff().
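The API-key check can be automated with a small fail-fast helper; require_key is a hypothetical name, not part of either library:

```python
import os

# Hypothetical helper: raise early if a required credential is missing,
# instead of failing mid-run inside a provider call.
def require_key(name: str) -> None:
    if not os.environ.get(name):
        raise RuntimeError(f"Set {name} before running the crew.")

# Example usage before building the crew:
# require_key("OPENAI_API_KEY")
```

Calling this once at startup turns a confusing mid-run provider error into an immediate, readable failure.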
Key Takeaways
- Install both the litellm and crewai Python packages to start the integration.
- Configure models through CrewAI's LLM class so that one LiteLLM-routed interface covers many providers.
- Use kickoff_async() with asyncio for scalable, non-blocking AI workflows.
- Verify model names, API keys, and environment compatibility to avoid runtime errors.