How-to · Beginner · 3 min read

How to use Composio with LangChain

Quick answer
Use the ComposioToolSet from composio_openai to fetch tool definitions and pass them to the tools parameter of an OpenAI chat completions call, then execute any resulting tool calls with toolset.handle_tool_calls(). The same tool definitions can be combined with LangChain's OpenAI integrations for agent workflows.

Prerequisites

  • Python 3.8+
  • OpenAI API key
  • Composio API key
  • pip install "openai>=1.0" composio-core composio-openai langchain-openai langchain-community

Setup

Install the required packages and set your environment variables for API keys.

  • Install packages: openai, composio-core, composio-openai, langchain-openai, and langchain-community.
  • Set OPENAI_API_KEY and COMPOSIO_API_KEY in your environment.
bash
pip install "openai>=1.0" composio-core composio-openai langchain-openai langchain-community
output
Collecting openai
Collecting composio-core
Collecting composio-openai
Collecting langchain-openai
Collecting langchain-community
Successfully installed openai composio-core composio-openai langchain-openai langchain-community
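The two environment variables from the setup list can be exported in the shell before running any scripts. The key values below are placeholders; substitute your real keys:

```shell
# Placeholder values -- replace with your actual API keys
export OPENAI_API_KEY="sk-your-openai-key"
export COMPOSIO_API_KEY="your-composio-key"
```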

Step by step

This example creates a ComposioToolSet, retrieves tool definitions, and passes them to the OpenAI client in a chat completion call. It then handles any tool calls returned in the response.

python
import os
from openai import OpenAI
from composio_openai import ComposioToolSet, Action

# Initialize Composio toolset with your Composio API key
toolset = ComposioToolSet(api_key=os.environ["COMPOSIO_API_KEY"])

# Get tools for specific actions (e.g., GitHub star repository)
tools = toolset.get_tools(actions=[Action.GITHUB_STAR_A_REPOSITORY])

# Initialize OpenAI client with your OpenAI API key
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Prepare chat messages
messages = [{"role": "user", "content": "Star the openai/openai-python repo on GitHub"}]

# Call chat completion with tools parameter
response = client.chat.completions.create(
    model="gpt-4o-mini",
    tools=tools,
    messages=messages
)

# Print the assistant's reply (often None when the model opts to call a tool)
print("Assistant reply:", response.choices[0].message.content)

# Execute any tool calls from the response via Composio
toolset.handle_tool_calls(response)
output
Assistant reply: I will star the openai/openai-python repository on GitHub for you.
# (Tool call executed successfully)
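If you want the model to produce a final natural-language summary after the tool runs, the tool result must be appended back to the message list as a "tool" message keyed by tool_call_id. A minimal sketch of that bookkeeping, using a hand-built assistant message and a hypothetical result string in place of live API objects:

```python
import json

# Hypothetical assistant message with one tool call, shaped like
# response.choices[0].message from the chat completions API
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_abc123",  # hypothetical tool-call id
        "type": "function",
        "function": {
            "name": "GITHUB_STAR_A_REPOSITORY",
            "arguments": json.dumps({"owner": "openai", "repo": "openai-python"}),
        },
    }],
}

# Hypothetical result from the executed tool
tool_result = {"successful": True}

messages = [{"role": "user", "content": "Star the openai/openai-python repo on GitHub"}]
messages.append(assistant_message)

# Each tool result goes back as a "tool" message tied to its tool_call_id
for call in assistant_message["tool_calls"]:
    messages.append({
        "role": "tool",
        "tool_call_id": call["id"],
        "content": json.dumps(tool_result),
    })

# A second client.chat.completions.create(model=..., messages=messages)
# call would now return the final natural-language answer.
```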

Common variations

You can use different models like gpt-4o or gpt-4o-mini depending on your latency and cost needs. For asynchronous usage, create an AsyncOpenAI client and await client.chat.completions.create(). Streaming is supported by setting stream=True and iterating over the returned chunks.

python
import asyncio
from openai import AsyncOpenAI

# Async usage requires the AsyncOpenAI client; await the same
# chat.completions.create() method (acreate() is the legacy 0.x API)
async_client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

async def async_chat():
    response = await async_client.chat.completions.create(
        model="gpt-4o-mini",
        tools=tools,
        messages=messages,
        stream=True
    )
    # With stream=True the response is an async iterator of chunks
    async for chunk in response:
        print(chunk.choices[0].delta.content or "", end="", flush=True)

asyncio.run(async_chat())
output
I will star the openai/openai-python repository on GitHub for you.

Troubleshooting

  • If you get an authentication error, verify your OPENAI_API_KEY and COMPOSIO_API_KEY environment variables are set correctly.
  • If tools are not recognized, ensure you pass the tools parameter to chat.completions.create() and use the latest composio-core and composio-openai packages.
  • For unexpected errors, check your network connectivity and API quota limits.
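A quick preflight check along these lines can catch missing keys before any API call is made (the variable names are the ones used throughout this article):

```python
import os

def missing_keys(required=("OPENAI_API_KEY", "COMPOSIO_API_KEY")):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

absent = missing_keys()
if absent:
    print("Missing environment variables:", ", ".join(absent))
else:
    print("All API keys present.")
```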

Key takeaways

  • Use ComposioToolSet to fetch and manage AI tools for OpenAI chat completions.
  • Pass tools via the tools parameter in client.chat.completions.create() for tool-enabled AI interactions.
  • Handle tool calls with toolset.handle_tool_calls() to execute or simulate tool actions.
  • Support async and streaming by using the AsyncOpenAI client with await and stream=True respectively.
  • Always set API keys via environment variables OPENAI_API_KEY and COMPOSIO_API_KEY.
Verified 2026-04 · gpt-4o-mini, gpt-4o