
How to use GroupChatManager in AutoGen

Quick answer
Use GroupChatManager in AutoGen to coordinate multi-agent conversations: define your agents, collect them in a GroupChat, create a GroupChatManager for that chat, and start the conversation with an agent's initiate_chat() call, passing the manager as the recipient. The manager then handles speaker selection and message passing automatically.

PREREQUISITES

  • Python 3.8+
  • pip install pyautogen
  • OpenAI API key (set in environment variable OPENAI_API_KEY)

Setup

Install the pyautogen package and set your OpenAI API key in the environment variable OPENAI_API_KEY. This lets AutoGen call OpenAI models for agent conversations.

bash
pip install pyautogen
export OPENAI_API_KEY="sk-..."

Step by step

Define your agents with roles and an LLM configuration, collect them in a GroupChat, wrap that chat in a GroupChatManager, then start the conversation by calling initiate_chat() on one of the agents with the manager as the recipient. The manager handles speaker selection and message passing automatically.

python
import os
import autogen

# Shared LLM configuration for all agents
llm_config = {
    "config_list": [
        {"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}
    ]
}

# Initialize agents with roles and system messages
assistant = autogen.AssistantAgent(
    name="assistant",
    system_message="You are a helpful assistant.",
    llm_config=llm_config,
)
user = autogen.UserProxyAgent(
    name="user",
    human_input_mode="NEVER",      # fully automated; no console prompts
    code_execution_config=False,   # this proxy does not execute code
)

# Create a GroupChat and a GroupChatManager to orchestrate it
groupchat = autogen.GroupChat(agents=[user, assistant], messages=[], max_round=4)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

# Run the group chat
user.initiate_chat(manager, message="Hello, assistant! How do you work?")

# Print conversation history (messages are plain dicts)
for msg in groupchat.messages:
    print(f"{msg['name']}: {msg['content']}")
output (the assistant's exact reply will vary)
user: Hello, assistant! How do you work?
assistant: Hello! I process your messages and respond intelligently using the GPT-4o model.
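
As an aside on the print loop above: the group chat history is a list of plain dicts, so fields are accessed by key rather than by attribute. A minimal sketch of one entry (the field values here are illustrative, not real model output):

```python
# One entry of the group chat history, as a plain dict
msg = {"name": "assistant", "content": "Hello! How can I help?", "role": "user"}

# Same access pattern as the print loop above
line = f"{msg['name']}: {msg['content']}"
print(line)
```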

Common variations

  • Use different models like gpt-4o-mini for faster, cheaper responses.
  • Run the chat asynchronously with await user.a_initiate_chat(manager, ...) in async contexts.
  • Customize agent system messages to define distinct personalities or roles.
python
import asyncio
import autogen

async def async_run():
    # Reuses the agents and llm_config defined above
    groupchat_async = autogen.GroupChat(
        agents=[user, assistant], messages=[], max_round=4
    )
    manager_async = autogen.GroupChatManager(
        groupchat=groupchat_async, llm_config=llm_config
    )
    await user.a_initiate_chat(manager_async, message="Hi asynchronously!")
    for msg in groupchat_async.messages:
        print(f"{msg['name']}: {msg['content']}")

asyncio.run(async_run())
output (the assistant's exact reply will vary)
user: Hi asynchronously!
assistant: Hello! Running asynchronously allows non-blocking multi-agent chats.

Troubleshooting

  • If you see authentication errors, verify that the OPENAI_API_KEY environment variable is set correctly.
  • If messages do not flow, ensure agents have unique names and valid system messages, and check that the chat has not hit its round limit.
  • For rate limits, consider using smaller models or adding retry logic.
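
The retry-logic suggestion can be sketched without any AutoGen dependency; `with_retries` below is a hypothetical helper (not part of AutoGen) that retries a callable with exponential backoff:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(); on failure, sleep base_delay * 2**i, then retry."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * 2 ** i)

# Demo with a flaky stand-in for a rate-limited API call
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_retries(flaky, attempts=4, base_delay=0.01))  # prints "ok" on the third attempt
```

In a group chat you could wrap the call that kicks off the conversation, e.g. with_retries(lambda: user.initiate_chat(manager, message="...")).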

Key Takeaways

  • Wrap multiple agents in a GroupChat and hand it to a GroupChatManager to automate multi-agent conversations.
  • Use initiate_chat() for synchronous or a_initiate_chat() for asynchronous execution.
  • Customize agent roles and system messages to control conversation behavior.
  • Always set your OpenAI API key in os.environ["OPENAI_API_KEY"].
  • Handle common errors by checking API keys, agent names, and model usage.
Verified 2026-04 · gpt-4o, gpt-4o-mini