How-to · Intermediate · 3 min read

How to use nested chat in AutoGen

Quick answer
Use AutoGen's nested-chat feature: define agents with ConversableAgent, then call register_nested_chats on one agent so that, whenever a message arrives from the trigger sender, a registered inner conversation runs first and its summary becomes that agent's reply. Start the outer conversation with initiate_chat; context is passed between the outer and inner chats via the chat queue's message and summary_method settings.
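The control flow behind this can be sketched library-free (plain Python with hypothetical function names; AutoGen wires this pattern up for you):

```python
# Library-free sketch of the nested-chat pattern: when a trigger message
# arrives, an inner conversation runs first and its summary becomes the reply.

def inner_chat(task):
    # Stand-in for the nested agent-to-agent conversation
    return [f"draft for: {task}", f"refined answer for: {task}"]

def summarize(transcript):
    # Stand-in for AutoGen's summary_method (here: take the last message)
    return transcript[-1]

def outer_reply(message):
    # Trigger fires: run the inner chat, return its summary as the reply
    return summarize(inner_chat(message))

print(outer_reply("explain unit testing"))  # -> refined answer for: explain unit testing
```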

PREREQUISITES

  • Python 3.8+
  • pip install pyautogen (the package is imported as autogen)
  • OpenAI API key set in environment variable OPENAI_API_KEY

Setup

Install the pyautogen package (imported in code as autogen) and set your OpenAI API key in the environment.

  • Run pip install pyautogen
  • Export your API key: export OPENAI_API_KEY='your_api_key' on Linux/macOS, or set it in the Windows environment variables
bash
pip install pyautogen
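A quick pre-flight check (plain Python, no AutoGen required) avoids failing halfway through a chat because the key is missing; the helper name here is illustrative:

```python
import os

def has_openai_key(env):
    # True when a non-empty OPENAI_API_KEY is present in the given mapping
    return bool(env.get("OPENAI_API_KEY"))

if not has_openai_key(os.environ):
    print("OPENAI_API_KEY is not set; export it before running the examples")
```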

Step by step

This example demonstrates a nested chat using AutoGen's register_nested_chats: a user proxy starts an outer chat with a Writer agent, and each time the Writer replies, a nested chat with a Reviewer agent runs automatically, with its result fed back into the outer conversation.

python
import os
from autogen import ConversableAgent

llm_config = {
    "config_list": [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]
}

# Outer agent: produces the summary
writer = ConversableAgent(
    name="Writer",
    system_message="You summarize text concisely.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

# Inner agent: runs inside the nested chat
reviewer = ConversableAgent(
    name="Reviewer",
    system_message="You expand summaries into detailed explanations.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

# Agent that starts the outer conversation (no LLM attached)
user = ConversableAgent(name="User", llm_config=False, human_input_mode="NEVER")

def reflection_message(recipient, messages, sender, config):
    # Hand the Writer's latest reply to the Reviewer as the nested chat's opener
    last = recipient.chat_messages_for_summary(sender)[-1]["content"]
    return f"Explain the following summary in detail:\n\n{last}"

# Whenever Writer's reply reaches User, run a nested chat with Reviewer first;
# its last message becomes User's reply back to Writer
user.register_nested_chats(
    chat_queue=[
        {
            "recipient": reviewer,
            "message": reflection_message,
            "summary_method": "last_msg",
            "max_turns": 1,
        }
    ],
    trigger=writer,
)

# Start the outer chat; the nested chat runs automatically after Writer replies
result = user.initiate_chat(
    writer,
    message="Summarize the benefits of unit testing in two sentences.",
    max_turns=2,
)
print(result.summary)
output
User (to Writer):

Summarize the benefits of unit testing in two sentences.

--------------------------------------------------------------------------------
Writer (to User):

...

********************************************************************************
Starting a new chat....
********************************************************************************
User (to Reviewer):

Explain the following summary in detail:
...
(output abridged; the Reviewer's explanation becomes User's reply to Writer)

Common variations

AutoGen exposes asynchronous counterparts of the chat methods (for example a_initiate_chat), which let nested workflows run concurrently with other work. You can also swap in a smaller model such as gpt-4o-mini for faster, cheaper responses, or add more entries to chat_queue for multi-step nested workflows.

python
import asyncio
import os

from autogen import ConversableAgent

async def nested_chat_async():
    llm_config = {
        "config_list": [{"model": "gpt-4o-mini", "api_key": os.environ["OPENAI_API_KEY"]}]
    }
    writer = ConversableAgent(name="Writer", llm_config=llm_config, human_input_mode="NEVER")
    user = ConversableAgent(name="User", llm_config=False, human_input_mode="NEVER")
    # Nested chats can be registered on `user` exactly as in the synchronous
    # example; omitted here to keep the async mechanics in focus.

    # a_initiate_chat is the asynchronous counterpart of initiate_chat
    result = await user.a_initiate_chat(
        writer,
        message="Summarize the benefits of unit testing in one sentence.",
        max_turns=1,
    )
    print(result.summary)

asyncio.run(nested_chat_async())
output
User (to Writer):

Summarize the benefits of unit testing in one sentence.
...
(output abridged)

Troubleshooting

  • If you see authentication errors, verify your OPENAI_API_KEY environment variable is set correctly.
  • If nested messages lack context, pass conversation history explicitly, e.g. via the carryover argument of initiate_chat or a callable chat_queue message that embeds earlier replies.
  • For rate limits, consider using smaller models or batching requests.
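The context bullet can be sketched without the library: thread earlier results into the next prompt yourself. The run_chat function below is a hypothetical, deterministic stand-in for an LLM call:

```python
def run_chat(prompt):
    # Stand-in for a real LLM call; echoes the last prompt line deterministically
    return f"reply to: {prompt.splitlines()[-1]}"

history = []

first = run_chat("Summarize the text.")
history.append(first)

# Prepend the prior turn so the next chat sees the earlier result
carryover = "\n".join(history)
second = run_chat(f"{carryover}\nNow explain the summary in detail.")
print(second)  # -> reply to: Now explain the summary in detail.
```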

Key Takeaways

  • Register inner conversations with register_nested_chats to build nested chat flows in AutoGen.
  • Manage context explicitly: use callable messages, summary_method, or carryover to pass results between outer and inner chats.
  • Async counterparts such as a_initiate_chat enable concurrent nested workflows.
Verified 2026-04 · gpt-4o, gpt-4o-mini