
How to add multiple tools to OpenAI assistant

Quick answer
To add multiple tools to an OpenAI assistant, make each tool a callable Python function or class, then write a dispatcher that routes each user request either to the matching tool or to client.chat.completions.create as a fallback. The dispatcher manages tool invocation and aggregates the responses into a seamless assistant experience, with no framework dependencies beyond the OpenAI SDK.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install "openai>=1.0"

Setup

Install the latest OpenAI Python SDK and set your API key as an environment variable.

  • Run pip install "openai>=1.0" (quote the version specifier so the shell does not treat > as a redirect)
  • Set the OPENAI_API_KEY environment variable to your API key

```bash
pip install "openai>=1.0"
export OPENAI_API_KEY="your-api-key"
```

Step by step

Define multiple tools as Python functions, then create a dispatcher that routes user requests to the correct tool or to the OpenAI chat completion. Combine tool outputs for the final assistant response.

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Define multiple tools

def calculator_tool(query: str) -> str:
    # Simple eval for demo; replace with a safe parser in production
    try:
        result = str(eval(query, {"__builtins__": {}}))
        return f"Calculator result: {result}"
    except Exception as e:
        return f"Calculator error: {e}"


def weather_tool(location: str) -> str:
    # Dummy static response; replace with a real API call
    return f"Weather in {location}: Sunny, 75°F"


def dispatch_tool(user_input: str) -> str:
    # Basic keyword routing: tool prefixes first, chat completion as fallback
    if user_input.lower().startswith("calculate"):
        expr = user_input[len("calculate"):].strip()
        return calculator_tool(expr)
    elif user_input.lower().startswith("weather"):
        loc = user_input[len("weather"):].strip()
        return weather_tool(loc)
    else:
        # Default to OpenAI chat completion
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": user_input}]
        )
        return response.choices[0].message.content


# Example usage
if __name__ == "__main__":
    inputs = [
        "Calculate 2 + 2 * 3",
        "Weather New York",
        "Tell me a joke"
    ]

    for inp in inputs:
        output = dispatch_tool(inp)
        print(f"Input: {inp}\nOutput: {output}\n")
```

Output:

```text
Input: Calculate 2 + 2 * 3
Output: Calculator result: 8

Input: Weather New York
Output: Weather in New York: Sunny, 75°F

Input: Tell me a joke
Output: [OpenAI-generated joke text here]
```
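The eval() call in calculator_tool is only for demonstration. A minimal safe replacement, sketched here, parses the expression with Python's ast module and whitelists arithmetic operators, so nothing outside that list can ever execute:

```python
import ast
import operator

# Only these AST operator nodes are allowed to run
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.USub: operator.neg, ast.UAdd: operator.pos,
}

def safe_calculate(expr: str) -> float:
    # Walk the parsed AST instead of calling eval()
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))
```

Anything that is not a number or a whitelisted operator, such as a function call, raises ValueError instead of executing.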

Common variations

You can extend this pattern by:

  • Adding async support with asyncio and await for API calls
  • Using different OpenAI models like gpt-4.1 or gpt-4o-mini
  • Integrating third-party APIs as tools with HTTP requests
  • Implementing a more advanced intent classification model to route requests
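Beyond keyword routing, you can also let the model pick the tool itself via the Chat Completions tools parameter (OpenAI's built-in function calling). The sketch below shows only the local half of that loop — the schema you advertise and an executor for whatever tool call comes back; the tool_call dict here mirrors the shape of the response.choices[0].message.tool_calls entries rather than the SDK's own objects:

```python
import json

# Schemas advertised to the model via tools=TOOL_SCHEMAS
TOOL_SCHEMAS = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    },
]

# Local implementations keyed by the name in the schema
LOCAL_TOOLS = {"get_weather": lambda location: f"Weather in {location}: Sunny, 75°F"}

def execute_tool_call(tool_call: dict) -> str:
    # The model returns the tool name plus JSON-encoded arguments;
    # decode the arguments and run the matching local function.
    fn = tool_call["function"]
    args = json.loads(fn["arguments"])
    return LOCAL_TOOLS[fn["name"]](**args)
```

With a live client you would pass tools=TOOL_SCHEMAS to client.chat.completions.create and feed each tool result back as a {"role": "tool"} message before asking the model for its final answer.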
For example, async support uses the SDK's AsyncOpenAI client; there is no chat.completions.acreate in the v1 SDK — you await the regular create method on the async client:

```python
import asyncio
import os
from openai import AsyncOpenAI

client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

async def async_calculator_tool(query: str) -> str:
    # Simulate async operation
    await asyncio.sleep(0.1)
    try:
        result = str(eval(query, {"__builtins__": {}}))
        return f"Calculator result: {result}"
    except Exception as e:
        return f"Calculator error: {e}"

async def async_dispatch_tool(user_input: str) -> str:
    if user_input.lower().startswith("calculate"):
        expr = user_input[len("calculate"):].strip()
        return await async_calculator_tool(expr)
    else:
        # AsyncOpenAI exposes the same interface; just await create()
        response = await client.chat.completions.create(
            model="gpt-4.1",
            messages=[{"role": "user", "content": user_input}]
        )
        return response.choices[0].message.content

# Run async example
async def main():
    output = await async_dispatch_tool("Calculate 10 / 2")
    print(output)

if __name__ == "__main__":
    asyncio.run(main())
```

Output:

```text
Calculator result: 5.0
```
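The third-party API variation can be sketched like this; api.example.com is a placeholder endpoint, and the fetch function is injected so the tool can be exercised without a live network call:

```python
import json
from typing import Callable

def http_weather_tool(location: str, fetch: Callable[[str], str]) -> str:
    # `fetch` performs the HTTP GET and returns the body as text; injecting
    # it keeps the tool testable and lets you swap urllib, requests, etc.
    url = f"https://api.example.com/weather?q={location}"  # placeholder URL
    try:
        payload = json.loads(fetch(url))
        return f"Weather in {location}: {payload['condition']}, {payload['temp_f']}°F"
    except Exception as e:
        return f"Weather error: {e}"

# A real fetch could be:
#   import urllib.request
#   def fetch(url): return urllib.request.urlopen(url, timeout=5).read().decode()
```

Because errors are caught and returned as text, a bad response degrades into a readable tool message instead of crashing the dispatcher.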

Troubleshooting

  • If you get authentication errors, verify your OPENAI_API_KEY environment variable is set correctly.
  • For unexpected tool routing, improve your input parsing or use a dedicated intent classification model.
  • If the OpenAI API call fails, check your network connection and API usage limits.
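For the first bullet, a fail-fast check before constructing the client surfaces a missing key immediately rather than as a deep SDK auth error (a sketch; the sk- prefix check assumes a standard-format OpenAI key):

```python
import os

def check_api_key() -> str:
    # Fail fast with a clear message instead of a 401 from the API
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key.startswith("sk-"):
        raise RuntimeError("OPENAI_API_KEY is missing or malformed")
    return "OPENAI_API_KEY looks set"
```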

Key Takeaways

  • Modularize each tool as a separate function or class for clean integration.
  • Use a dispatcher function to route user inputs to the correct tool or OpenAI chat completion.
  • Leverage async calls for better performance when integrating multiple APIs.
  • Always secure your API key via environment variables to avoid leaks.
  • Test each tool independently before combining for a robust assistant.
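The first two takeaways can be condensed into a registry pattern, sketched below, where adding a tool is one dict entry instead of another elif branch (the demo eval again stands in for a real parser):

```python
# Registry: first word of the input selects the tool
TOOLS = {
    "calculate": lambda arg: f"Calculator result: {eval(arg, {'__builtins__': {}})}",
    "weather": lambda arg: f"Weather in {arg}: Sunny, 75°F",
}

def dispatch(user_input: str):
    # None means "no tool matched; fall back to the chat completion"
    keyword, _, arg = user_input.partition(" ")
    handler = TOOLS.get(keyword.lower())
    return handler(arg.strip()) if handler else None
```

Each tool can then be unit-tested through the registry in isolation before the OpenAI fallback is wired in.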
Verified 2026-04 · gpt-4o-mini, gpt-4.1