How to use CrewAI with Ollama
Quick answer
To use CrewAI with Ollama, you run Ollama models locally or on your own server and interact with them via the ollama CLI or HTTP API. CrewAI can call these models by invoking the ollama command or its HTTP endpoints, giving CrewAI workflows seamless access to Ollama's local AI models.

Prerequisites

- Python 3.8+
- Ollama installed and configured (https://ollama.com)
- A working CrewAI environment (pip install crewai)
- pip install requests
Set up Ollama and CrewAI
Install Ollama on your machine from https://ollama.com and make sure it is running. Then install the Python dependency used for HTTP requests to Ollama's API (the CLI variant needs no extra packages), and have your CrewAI environment ready for integration.
pip install requests

Step-by-step integration example
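Before running the example, it can help to confirm that the daemon is reachable and that the model you plan to use has been pulled. This is a minimal sketch against Ollama's native /api/tags endpoint, which lists locally installed models; the default port 11434 is assumed.

import requests

# Ollama's native API lists installed models at /api/tags
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()

# Model names carry a tag suffix, e.g. "llama2:latest"
installed = [m["name"] for m in resp.json().get("models", [])]
print("Installed models:", installed)

if not any(name.startswith("llama2") for name in installed):
    print("llama2 not found - pull it with: ollama pull llama2")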
The main example below shows how to call an Ollama model from Python using the HTTP API and integrate the response into a CrewAI workflow.
import requests

# Ollama local API endpoint (OpenAI-compatible)
OLLAMA_API_URL = "http://localhost:11434"
MODEL_NAME = "llama2"

# Function to query an Ollama model via the chat completions endpoint
def query_ollama(prompt: str) -> str:
    url = f"{OLLAMA_API_URL}/v1/chat/completions"
    headers = {"Content-Type": "application/json"}
    data = {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = requests.post(url, json=data, headers=headers)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# Example usage
if __name__ == "__main__":
    prompt = "Explain how CrewAI integrates with Ollama."
    answer = query_ollama(prompt)
    print("Ollama response:", answer)
    # Here you would feed 'answer' into your CrewAI workflow (see the crew sketch below)
Output:

Ollama response: CrewAI can integrate with Ollama by invoking Ollama's local models via HTTP API or CLI, enabling seamless AI-powered workflows.
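To plug Ollama into CrewAI itself rather than calling it from standalone Python, recent crewai releases expose an LLM class that routes to local Ollama models through a model string of the form "ollama/<model-name>". This is a minimal one-agent sketch; the role, goal, backstory, and task text are illustrative placeholders, so check the CrewAI documentation for the exact API of your installed version.

from crewai import Agent, Crew, Task, LLM

# Point CrewAI at the local Ollama daemon
ollama_llm = LLM(model="ollama/llama2", base_url="http://localhost:11434")

# A minimal one-agent crew; role/goal/backstory are illustrative placeholders
researcher = Agent(
    role="Researcher",
    goal="Answer questions using a locally hosted model",
    backstory="You run entirely on local infrastructure via Ollama.",
    llm=ollama_llm,
)

task = Task(
    description="Explain how CrewAI integrates with Ollama.",
    expected_output="A short explanation in plain language.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)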
Common variations
- Use the ollama CLI directly from Python with subprocess for local model calls (see the example below).
- Switch Ollama models by changing the MODEL_NAME variable.
- Integrate asynchronously using httpx or aiohttp for non-blocking calls (see the async sketch after the CLI example).
import subprocess

# Query a model through the ollama CLI ("ollama run <model> <prompt>")
def query_ollama_cli(prompt: str) -> str:
    result = subprocess.run(
        ["ollama", "run", "llama2", prompt],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()
if __name__ == "__main__":
    print(query_ollama_cli("What is CrewAI?"))

Output:
CrewAI is a platform that enables AI workflow automation and integration with models like Ollama.
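For non-blocking calls, the same chat completions endpoint can be hit with httpx's async client, as mentioned in the variations list. This is a minimal sketch assuming httpx is installed (pip install httpx); aiohttp would work the same way.

import asyncio
import httpx

OLLAMA_API_URL = "http://localhost:11434"

# Async variant of query_ollama using httpx.AsyncClient
async def query_ollama_async(prompt: str, model: str = "llama2") -> str:
    async with httpx.AsyncClient(timeout=120.0) as client:
        response = await client.post(
            f"{OLLAMA_API_URL}/v1/chat/completions",
            json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(asyncio.run(query_ollama_async("What is CrewAI?")))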
Troubleshooting
- If you get connection errors, ensure the Ollama daemon is running and listening on the correct port (default 11434); a quick programmatic check is sketched after this list.
- Check your firewall or network settings if HTTP requests to Ollama fail.
- Use the ollama list CLI command to verify installed models.
- For permission issues with CLI calls, run your Python script with appropriate user privileges.
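To turn a raw connection failure into an actionable message, you can probe the daemon before issuing real requests. A small sketch with the requests library; the root endpoint normally answers with a plain "Ollama is running" banner, and ollama serve starts the daemon manually.

import requests

# Probe the Ollama daemon before issuing real requests
def check_ollama(base_url: str = "http://localhost:11434") -> bool:
    try:
        # The daemon answers GET / with a plain "Ollama is running" banner
        requests.get(base_url, timeout=5).raise_for_status()
        return True
    except requests.exceptions.ConnectionError:
        print("Could not reach Ollama - start the daemon with 'ollama serve'.")
        return False

if __name__ == "__main__":
    if check_ollama():
        print("Ollama daemon is up.")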
Key Takeaways
- Use Ollama's local HTTP API or CLI to integrate with CrewAI workflows.
- Switch Ollama models easily by changing the model parameter in API calls.
- Ensure the Ollama daemon is running to avoid connection errors.
- Python's requests or subprocess modules enable flexible Ollama integration.
- CrewAI can leverage Ollama models for powerful local AI processing.