How-to · Intermediate · 3 min read

How to use LiteLLM with AutoGen

Quick answer
Run LiteLLM's OpenAI-compatible proxy in front of your model, then point AutoGen's llm_config at that endpoint. AutoGen's multi-agent orchestration then works unchanged against any local or cloud model LiteLLM supports.

PREREQUISITES

  • Python 3.8+
  • pip install 'litellm[proxy]' pyautogen (the package installs as pyautogen but imports as autogen)
  • Basic familiarity with Python and the command line

Setup

Install the packages with pip, then start LiteLLM's proxy server for the model you want to serve. If that model sits behind a cloud API, export the provider's key (for example OPENAI_API_KEY) before starting the proxy.

bash
pip install 'litellm[proxy]' pyautogen
litellm --model gpt-3.5-turbo   # starts an OpenAI-compatible proxy, by default on http://0.0.0.0:4000

Step by step

This example configures AutoGen to talk to the running LiteLLM proxy as if it were an OpenAI endpoint, then runs a simple two-agent chat.

python
from autogen import AssistantAgent, UserProxyAgent

# Point AutoGen at the LiteLLM proxy's OpenAI-compatible endpoint.
llm_config = {
    "config_list": [
        {
            "model": "gpt-3.5-turbo",          # whatever model the proxy was started with
            "api_key": "not-needed",           # clients require a value; the local proxy ignores it
            "base_url": "http://0.0.0.0:4000",
        }
    ],
}

# The assistant answers via the LLM behind the proxy.
assistant = AssistantAgent("assistant", llm_config=llm_config)

# The user proxy sends our message and collects the reply.
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",       # fully automated, no human in the loop
    max_consecutive_auto_reply=0,   # stop after the assistant's first reply
    code_execution_config=False,    # this example does not execute code
)

user_proxy.initiate_chat(assistant, message="Hello from AutoGen with LiteLLM!")
AutoGen prints the conversation transcript to the console; the assistant's exact reply depends on the model running behind the proxy.

Common variations

  • Serve a different model by restarting the proxy, e.g. litellm --model ollama/llama2 for a local Ollama model.
  • Add more agents (for example with autogen.GroupChat) for complex workflows.
  • Call litellm.completion() directly for one-off requests, or wrap blocking calls with asyncio.to_thread when you need async.
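Bridging between blocking and async code needs only the standard library. A minimal sketch, where blocking_chat is a hypothetical stand-in for a real blocking call such as litellm.completion() or user_proxy.initiate_chat():

```python
import asyncio
import time


def blocking_chat(message: str) -> str:
    # Hypothetical stand-in for a blocking LLM call.
    time.sleep(0.1)
    return f"reply to: {message}"


async def main() -> None:
    # asyncio.to_thread runs the blocking function in a worker thread,
    # so the event loop stays free for other tasks.
    reply = await asyncio.to_thread(blocking_chat, "hello")
    print(reply)


asyncio.run(main())
```

The same pattern works for any blocking client call you need to await from async code.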

Troubleshooting

  • If you get connection errors, confirm the proxy is running and that base_url matches the host and port shown in the litellm startup log.
  • Authentication errors usually come from the upstream provider; check the API key you exported before starting the proxy.
  • If a model is unavailable or misbehaves, verify your provider supports it and update the litellm package.
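A quick way to rule out connection problems is to probe the proxy from Python. A minimal sketch using only the standard library; the default URL and the /health/liveliness path are assumptions, so adjust them to match your deployment:

```python
import urllib.error
import urllib.request


def proxy_is_up(url: str = "http://0.0.0.0:4000/health/liveliness",
                timeout: float = 2.0) -> bool:
    # Returns True only if the endpoint answers with HTTP 200.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: the proxy is not reachable.
        return False


print("proxy reachable:", proxy_is_up())
```

Run this before starting your agents; if it prints False, fix the proxy before debugging AutoGen.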

Key Takeaways

  • LiteLLM's proxy puts a single OpenAI-compatible endpoint in front of many local and cloud models.
  • Point AutoGen's config_list base_url at the proxy and build agents as usual.
  • Swapping models means restarting the proxy, with no changes to your AutoGen code.
Verified 2026-04