Multi-step function calling patterns
Quick answer
Use the OpenAI Python SDK's tools parameter to define functions and detect tool_calls in responses for multi-step function calling. Chain calls by parsing the tool_calls arguments, invoking the corresponding functions, and feeding the results back into subsequent chat.completions.create requests.

Prerequisites
- Python 3.8+
- OpenAI API key (free tier works)
- pip install openai>=1.0
Setup
Install the official OpenAI Python SDK and set your API key as an environment variable.
- Install SDK: pip install openai
- Set environment variable: export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows)

pip install openai output
Collecting openai
  Downloading openai-1.x.x-py3-none-any.whl (xx kB)
Installing collected packages: openai
Successfully installed openai-1.x.x
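Before making any API calls, it can help to confirm the environment variable is actually visible to Python; a minimal check (the helper name here is ours, not part of the SDK):

```python
import os

def openai_env_ready() -> bool:
    """Return True if OPENAI_API_KEY is set to a non-empty value."""
    return bool(os.environ.get("OPENAI_API_KEY", "").strip())

if __name__ == "__main__":
    if openai_env_ready():
        print("OPENAI_API_KEY is set.")
    else:
        print("OPENAI_API_KEY is missing; API calls will fail to authenticate.")
```

A missing or whitespace-only key is the most common cause of authentication errors later on, so failing fast here saves a round trip.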
Step by step
This example demonstrates a multi-step function calling pattern using the OpenAI Python SDK. It defines a get_weather function tool, asks the model about the weather, detects the resulting tool call, executes the function, and sends the result back to the model for a final answer.
```python
import os
import json
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def get_weather(location: str) -> str:
    # Dummy implementation for demo
    return f"The weather in {location} is sunny, 75°F."

# Define the function tool
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            },
            "required": ["location"]
        }
    }
}]

# Step 1: Ask model to get weather
messages = [{"role": "user", "content": "What's the weather in New York City?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    tools=tools
)

# Check if model wants to call a tool
if response.choices[0].finish_reason == "tool_calls":
    tool_call = response.choices[0].message.tool_calls[0]
    function_name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)

    # Execute the function
    if function_name == "get_weather":
        weather_result = get_weather(arguments["location"])

    # Step 2: Send the function result back as a role-"tool" message.
    # The tools API expects role "tool" plus the matching tool_call_id,
    # not the legacy role "function" from the deprecated function_call API.
    followup_messages = messages + [
        response.choices[0].message,
        {"role": "tool", "tool_call_id": tool_call.id, "content": weather_result}
    ]
    final_response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=followup_messages
    )
    print("Final answer:", final_response.choices[0].message.content)
else:
    print("Model response:", response.choices[0].message.content)
```
output
Final answer: The weather in New York City is sunny, 75°F.
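The single if-branch above handles one tool call, but the model may request several in one response. A generic dispatch sketch, assuming a plain dict registry and simplified tool_call dicts (the SDK actually returns objects with attribute access, so adapt accordingly):

```python
import json

# Hypothetical registry mapping tool names to Python callables
TOOL_REGISTRY = {
    "get_weather": lambda location: f"The weather in {location} is sunny, 75°F.",
}

def run_tool_calls(tool_calls):
    """Execute each requested tool call and build role-"tool" reply messages.

    tool_calls is a list of dicts shaped like the API's tool_call objects:
    {"id": ..., "function": {"name": ..., "arguments": "<JSON string>"}}.
    """
    replies = []
    for call in tool_calls:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        result = TOOL_REGISTRY[name](**args)
        replies.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": result,
        })
    return replies
```

Appending the assistant message followed by every reply from run_tool_calls keeps the conversation valid: each tool_call_id the model emitted gets exactly one role-"tool" answer.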
Common variations
You can adapt multi-step function calling for async usage, streaming, or different models.
- Async calls: use async/await with AsyncOpenAI, the SDK's async client.
- Streaming: stream partial responses by setting stream=True and iterating over chunks.
- Different models: use any model that supports the tools parameter, e.g., gpt-4o or gpt-4o-mini.
```python
import asyncio
import os
import json
from openai import AsyncOpenAI

# The async client is required for awaitable calls; OpenAI() is sync-only
client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

def get_weather(location: str) -> str:
    return f"The weather in {location} is sunny, 75°F."

async def main():
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"]
            }
        }
    }]
    messages = [{"role": "user", "content": "What's the weather in Boston?"}]
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        tools=tools
    )
    if response.choices[0].finish_reason == "tool_calls":
        tool_call = response.choices[0].message.tool_calls[0]
        args = json.loads(tool_call.function.arguments)
        if tool_call.function.name == "get_weather":
            weather = get_weather(args["location"])
        # Return the result as a role-"tool" message with the matching tool_call_id
        followup = messages + [
            response.choices[0].message,
            {"role": "tool", "tool_call_id": tool_call.id, "content": weather}
        ]
        final_resp = await client.chat.completions.create(model="gpt-4o-mini", messages=followup)
        print("Final answer:", final_resp.choices[0].message.content)
    else:
        print("Model response:", response.choices[0].message.content)

asyncio.run(main())
```
output
Final answer: The weather in Boston is sunny, 75°F.
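The streaming variation mentioned above can be sketched as follows. The chunk-accumulation helper is split out so it can be exercised without a network call; the stream_answer function itself is an assumption-laden sketch that needs a live OPENAI_API_KEY:

```python
def accumulate_deltas(chunks) -> str:
    """Join the text deltas from a Chat Completions stream into one string.

    Accepts any iterable of chunk objects shaped like the SDK's
    ChatCompletionChunk; choices[0].delta.content may be None.
    """
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
    return "".join(parts)

def stream_answer(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Call Chat Completions with stream=True and return the assembled text.

    Requires the openai package and OPENAI_API_KEY; not executed here.
    """
    from openai import OpenAI  # imported here so the helper above stays standalone
    client = OpenAI()
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    return accumulate_deltas(stream)
```

When a streamed response contains tool calls instead of text, the deltas carry incremental tool_calls fragments rather than content, so the accumulator would need to merge those by index before dispatching.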
Troubleshooting
- If finish_reason is not "tool_calls", the model did not request a function call; check your tools definition and prompt.
- If you get JSON parsing errors on function.arguments, log the raw string to debug malformed JSON.
- Ensure the OPENAI_API_KEY environment variable is set correctly to avoid authentication errors.
- Use the latest OpenAI SDK (v1+) to avoid deprecated method errors.
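For the arguments-parsing bullet, a small defensive helper makes the failure mode visible instead of crashing the chain. This is a sketch; logging via print and falling back to an empty dict are assumptions you may want to change:

```python
import json

def parse_tool_arguments(raw: str) -> dict:
    """Parse a tool call's arguments JSON string, logging the raw payload on failure.

    Returns an empty dict when the model emitted malformed JSON or a non-object,
    so the caller can decide whether to retry or report the error to the model.
    """
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError as exc:
        print(f"Malformed tool arguments ({exc}): {raw!r}")
        return {}
    if not isinstance(parsed, dict):
        print(f"Tool arguments were not a JSON object: {raw!r}")
        return {}
    return parsed
```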
Key Takeaways
- Use the tools parameter to define callable functions for the model.
- Detect tool_calls in the response to trigger function execution.
- Chain calls by sending function results back as role-"tool" messages with the matching tool_call_id.
- Support async and streaming by using AsyncOpenAI and stream=True.
- Always parse function.arguments JSON carefully to avoid errors.