How-to · beginner · 3 min read

How to call external API with function calling

Quick answer
Use OpenAI's tools parameter to define functions your model can call, then detect tool_calls in the response to invoke external APIs. Pass the API results back to the model for further processing or final output.

Prerequisites

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install "openai>=1.0" (quote the spec so the shell does not treat > as a redirect)

Setup

Install the official openai Python SDK version 1.0 or higher and set your API key as an environment variable.

  • Install SDK: pip install openai
  • Set environment variable: export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows)
bash
pip install openai
output
Collecting openai
  Downloading openai-1.x.x-py3-none-any.whl (xx kB)
Installing collected packages: openai
Successfully installed openai-1.x.x
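A common setup stumble is an authentication error at the first request because the variable was exported in a different shell. A quick sanity check can fail fast instead; the helper name require_api_key below is just for illustration:

```python
import os

def require_api_key(env=os.environ):
    """Raise early if OPENAI_API_KEY is missing, instead of failing mid-request."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set in this shell")
    return key
```

Call it once at startup, before constructing the client.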

Step by step

Define the function schema in the tools parameter, call the chat completion, detect if the model requests a function call, invoke the external API accordingly, then send the API response back to the model for final completion.

python
import os
import json
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Define the function schema for the external API
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"}
            },
            "required": ["location"]
        }
    }
}]

# User prompt
messages = [{"role": "user", "content": "What's the weather in New York?"}]

# Call the chat completion with tools
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    tools=tools
)

choice = response.choices[0]

if choice.finish_reason == "tool_calls":
    # Extract function call details
    tool_call = choice.message.tool_calls[0]
    function_name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)

    # Simulate external API call (replace with real API call)
    def get_weather(location):
        # Dummy response
        return f"The weather in {location} is sunny, 75°F."

    api_result = get_weather(arguments["location"])

    # Send the API result back to the model as a "tool" role message that
    # references the tool_call_id (the tools API expects this, not the
    # legacy "function" role)
    followup_messages = messages + [
        choice.message,
        {"role": "tool", "tool_call_id": tool_call.id, "content": api_result}
    ]

    final_response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=followup_messages
    )

    print("Final answer:", final_response.choices[0].message.content)
else:
    print("Answer:", choice.message.content)
output
Final answer: The weather in New York is sunny, 75°F.
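The example above hardcodes a single function. Once you register several tools, a dispatch table keeps the routing step generic: look the requested name up in a dict and call the matching handler. A sketch, where get_weather and get_time are stand-ins for real external API calls:

```python
import json

# Local handlers, stand-ins for real external API calls.
def get_weather(location):
    return f"The weather in {location} is sunny, 75°F."

def get_time(timezone):
    return f"The time in {timezone} is 15:42."

# Map tool names (as declared in the tools schema) to handlers.
HANDLERS = {"get_weather": get_weather, "get_time": get_time}

def dispatch(tool_name, arguments_json):
    """Parse the model's JSON arguments and invoke the matching handler."""
    handler = HANDLERS.get(tool_name)
    if handler is None:
        return f"Error: unknown tool '{tool_name}'"
    args = json.loads(arguments_json)
    return handler(**args)
```

With this in place, the branch under finish_reason == "tool_calls" reduces to one dispatch call per entry in message.tool_calls.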

Common variations

You can make async calls with asyncio and the SDK's AsyncOpenAI client. Current chat models such as gpt-4o and gpt-4o-mini support function calling. Streaming (stream=True) also works with function calling, but tool-call arguments arrive as incremental deltas that you must accumulate before parsing.

python
import os
import json
import asyncio
from openai import AsyncOpenAI

async def main():
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

    tools = [{
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Get current time for a timezone",
            "parameters": {
                "type": "object",
                "properties": {
                    "timezone": {"type": "string"}
                },
                "required": ["timezone"]
            }
        }
    }]

    messages = [{"role": "user", "content": "What time is it in UTC?"}]

    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools
    )

    choice = response.choices[0]

    if choice.finish_reason == "tool_calls":
        tool_call = choice.message.tool_calls[0]
        args = json.loads(tool_call.function.arguments)

        # Example external API call (zoneinfo is in the stdlib on Python 3.9+,
        # so no third-party dependency is needed)
        def get_time(timezone):
            import datetime
            from zoneinfo import ZoneInfo
            now = datetime.datetime.now(ZoneInfo(timezone))
            return now.strftime("%Y-%m-%d %H:%M:%S")

        api_result = get_time(args["timezone"])

        # Return the result as a "tool" role message tied to the tool_call_id
        followup_messages = messages + [
            choice.message,
            {"role": "tool", "tool_call_id": tool_call.id, "content": api_result}
        ]

        final_response = await client.chat.completions.create(
            model="gpt-4o",
            messages=followup_messages
        )

        print("Final answer:", final_response.choices[0].message.content)
    else:
        print("Answer:", choice.message.content)

asyncio.run(main())
output
Final answer: The current time in UTC is 2026-04-27 15:42:10.
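With stream=True, tool-call arguments arrive as JSON fragments spread across chunks, keyed by a per-call index; you must concatenate the fragments before parsing. A minimal accumulator, demonstrated here on mock delta dicts rather than a live stream (real chunks expose the same fields on chunk.choices[0].delta.tool_calls):

```python
import json

def accumulate_tool_calls(deltas):
    """Merge streamed tool-call deltas (index, name, argument fragment)
    into complete calls keyed by index."""
    calls = {}
    for d in deltas:
        call = calls.setdefault(d["index"], {"name": "", "arguments": ""})
        if d.get("name"):
            call["name"] = d["name"]          # name arrives once, on the first delta
        call["arguments"] += d.get("arguments", "")  # fragments concatenate in order
    return calls

# Mock deltas in the shape a stream delivers them.
deltas = [
    {"index": 0, "name": "get_weather", "arguments": ""},
    {"index": 0, "arguments": '{"loca'},
    {"index": 0, "arguments": 'tion": "New York"}'},
]
calls = accumulate_tool_calls(deltas)
args = json.loads(calls[0]["arguments"])
```

Only parse the arguments after the stream signals the tool call is complete; a partial fragment is not valid JSON.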

Troubleshooting

  • If finish_reason is not tool_calls, the model did not request a function call; check your tools schema and prompt.
  • Ensure your tools parameter matches the function signature exactly, including required parameters.
  • For JSON parsing errors, validate the function.arguments string before loading.
  • If the external API call fails, handle exceptions gracefully and return fallback content to the model.
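The last two bullets combine naturally into one defensive wrapper: parse the arguments inside a try/except and turn any failure into a fallback string the model can relay, instead of letting the exception escape. A sketch; safe_invoke is a hypothetical helper name:

```python
import json

def safe_invoke(handler, arguments_json):
    """Parse tool arguments defensively and convert failures into
    model-readable error text rather than raising."""
    try:
        args = json.loads(arguments_json)
    except json.JSONDecodeError:
        return "Error: tool arguments were not valid JSON."
    try:
        return handler(**args)
    except Exception as exc:
        return f"Error: tool call failed ({exc})."
```

Returning the error text as the tool message content lets the model apologize or retry instead of the whole request crashing.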

Key Takeaways

  • Use the tools parameter to define callable functions for the model.
  • Detect tool_calls in the response to trigger external API calls.
  • Send the API results back as function role messages for final completion.
  • Async and streaming calls are supported with the OpenAI SDK.
  • Validate JSON arguments and handle API errors to avoid runtime failures.
Verified 2026-04 · gpt-4o-mini, gpt-4o