How to orchestrate multiple AI agents
Quick answer
To orchestrate multiple AI agents, use a central controller program that manages communication and task delegation between agents via API calls or message passing. Implement each agent as an independent LLM instance (e.g., gpt-4o, claude-3-5-sonnet-20241022) and coordinate their inputs and outputs to achieve complex workflows.
Prerequisites
- Python 3.8+
- OpenAI API key (free tier works)
- pip install openai>=1.0
Setup
Install the OpenAI Python SDK and set your API key as an environment variable to securely authenticate requests.
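For example, on macOS or Linux the key can be exported in your shell profile (the key value below is a placeholder, not a real key):

```shell
# Placeholder value; substitute your own API key.
export OPENAI_API_KEY="sk-..."
```

On Windows, use `setx OPENAI_API_KEY "sk-..."` in a command prompt instead.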
pip install openai
Step by step
This example demonstrates orchestrating two AI agents: one generates a question, and the other answers it. The controller coordinates their interaction.
import os
from openai import OpenAI
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
# Agent 1: Generate a question
question_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Generate an interesting trivia question."}]
)
question = question_response.choices[0].message.content
print(f"Agent 1 (Question): {question}")
# Agent 2: Answer the question
answer_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"Answer this question: {question}"}]
)
answer = answer_response.choices[0].message.content
print(f"Agent 2 (Answer): {answer}") output
Agent 1 (Question): What is the tallest mountain in the solar system? Agent 2 (Answer): The tallest mountain in the solar system is Olympus Mons on Mars.
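The two-call pattern above generalizes to a simple pipeline controller that chains any number of agents, feeding each agent's output to the next. A minimal sketch (the names `run_pipeline` and `make_openai_agent` are illustrative, not from an official SDK):

```python
from typing import Callable, List

# An agent is any callable that maps input text to output text.
Agent = Callable[[str], str]

def run_pipeline(agents: List[Agent], initial_input: str) -> List[str]:
    """Feed each agent's output to the next agent; return every intermediate result."""
    outputs = []
    current = initial_input
    for agent in agents:
        current = agent(current)
        outputs.append(current)
    return outputs

# In practice each Agent would wrap an LLM call, e.g. (sketch, assuming the
# OpenAI SDK client from the example above):
#
# def make_openai_agent(client, model, instruction):
#     def agent(text):
#         r = client.chat.completions.create(
#             model=model,
#             messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
#         )
#         return r.choices[0].message.content
#     return agent
```

Because agents are plain callables, the controller can mix models from different providers, or even non-LLM steps such as validators, without changing the pipeline logic.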
Common variations
You can extend orchestration by adding more agents, running independent calls concurrently, or integrating models from other providers for specialized tasks; for example, claude-3-5-sonnet-20241022 is accessed through Anthropic's SDK, not the OpenAI client. Streaming responses can improve responsiveness in interactive setups.
import asyncio
import os
from openai import AsyncOpenAI
from anthropic import AsyncAnthropic  # pip install anthropic
openai_client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
anthropic_client = AsyncAnthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
async def agent_generate():
    response = await openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Generate a creative story prompt."}]
    )
    return response.choices[0].message.content
async def agent_expand(prompt):
    response = await anthropic_client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": f"Expand this prompt into a short story: {prompt}"}]
    )
    return response.content[0].text
async def main():
    prompt = await agent_generate()
    print(f"Agent 1 prompt: {prompt}")
    story = await agent_expand(prompt)
    print(f"Agent 2 story: {story}")
asyncio.run(main())
Output
Agent 1 prompt: A mysterious door appears in the middle of a bustling city.
Agent 2 story: In the heart of the city, a door materialized overnight, shimmering with an otherworldly glow... (story continues)
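Note that the variation above still runs its two agents one after the other, because the second agent depends on the first agent's output. When agents are independent, asyncio.gather runs them concurrently, which is where the real latency win comes from. A minimal sketch (`fan_out` is a hypothetical helper name; each agent here stands in for an awaited SDK call on an async client such as AsyncOpenAI):

```python
import asyncio
from typing import Awaitable, Callable, List

# An async agent maps input text to output text, e.g. by awaiting
# client.chat.completions.create(...) on an AsyncOpenAI client.
AsyncAgent = Callable[[str], Awaitable[str]]

async def fan_out(agents: List[AsyncAgent], task: str) -> List[str]:
    """Send the same task to several agents concurrently and collect all replies in order."""
    return await asyncio.gather(*(agent(task) for agent in agents))
```

With N independent agents, total wall-clock time is roughly that of the slowest single call rather than the sum of all calls.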
Troubleshooting
- If you see rate limit errors, implement exponential backoff retries or reduce request frequency.
- Ensure environment variables are correctly set to avoid authentication failures.
- Check model availability and update model names as APIs evolve.
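The retry advice above can be sketched as a small wrapper with exponential backoff and jitter (`with_backoff` is an illustrative helper, not part of any SDK; in real use you would pass the SDK's rate-limit exception, e.g. openai.RateLimitError, as `retry_on`):

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(); on a retryable error, wait base_delay * 2**attempt (plus jitter) and retry."""
    for attempt in range(max_retries):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the original error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

For example, `with_backoff(lambda: client.chat.completions.create(...), retry_on=(openai.RateLimitError,))` retries a rate-limited request up to five times with growing delays.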
Key Takeaways
- Use a central controller to manage communication between multiple AI agents via API calls.
- Leverage different models for specialized tasks to build collaborative AI workflows.
- Implement async calls to run agents in parallel and improve efficiency.
- Handle API rate limits and authentication errors proactively.
- Keep model names and SDK usage up to date with provider documentation.