How to use function calling with the OpenAI Chat Completions API
Quick answer
Pass the functions parameter in your chat.completions.create call to define the functions the model may call. The model can then respond with a function_call object, which your code parses and executes locally.
Prerequisites
- Python 3.8+
- OpenAI API key (free tier works)
- pip install openai>=1.0
Setup
Install the official OpenAI Python SDK and set your API key as an environment variable.
- Install the SDK:
  pip install openai
- Set the environment variable:
  export OPENAI_API_KEY='your_api_key' (Linux/macOS)
  setx OPENAI_API_KEY "your_api_key" (Windows)
Step by step
This example demonstrates defining a function schema, sending it with the chat request, and handling the assistant's function call response.
import os
from openai import OpenAI
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
# Define the function schema
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"]
                }
            },
            "required": ["location"]
        }
    }
]
messages = [
    {"role": "user", "content": "What's the weather like in Boston?"}
]
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    functions=functions,
    function_call="auto"  # Let the model decide whether to call the function
)
message = response.choices[0].message
if message.function_call:
    function_name = message.function_call.name
    arguments = message.function_call.arguments
    print(f"Function to call: {function_name}")
    print(f"Arguments: {arguments}")
    # Here you would parse the JSON arguments and call your local function
else:
    print(message.content)
Output
Function to call: get_current_weather
Arguments: {"location": "Boston"}
Common variations
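The step the comment above leaves open is executing the requested function locally. A minimal sketch of that dispatch, assuming a hypothetical get_current_weather stub (the temperature value here is made up, not real data):

```python
import json

# Hypothetical local implementation matching the schema above (a stub;
# a real app would query a weather service instead of returning a constant).
def get_current_weather(location, unit="fahrenheit"):
    return {"location": location, "temperature": 72, "unit": unit}

# Map function names the model may request to local callables.
AVAILABLE_FUNCTIONS = {"get_current_weather": get_current_weather}

def execute_function_call(name, arguments_json):
    # The model returns arguments as a JSON string, so parse before calling.
    func = AVAILABLE_FUNCTIONS[name]
    arguments = json.loads(arguments_json)
    return func(**arguments)

result = execute_function_call("get_current_weather", '{"location": "Boston"}')
print(result)  # {'location': 'Boston', 'temperature': 72, 'unit': 'fahrenheit'}
```

Appending the result as a {"role": "function", "name": function_name, "content": json.dumps(result)} message and calling chat.completions.create again lets the model phrase a final natural-language answer.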
- Explicit function call: Set function_call={"name": "get_current_weather"} to force the model to call that specific function.
- Async usage: Use the AsyncOpenAI client if your environment supports async.
- Different models: Use gpt-4o-mini or gpt-4o depending on latency and cost needs.
import asyncio
import os
from openai import AsyncOpenAI

async def main():
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me the weather in New York."}],
        functions=functions,
        function_call="auto"
    )
    message = response.choices[0].message
    print(message)

asyncio.run(main())
Troubleshooting
- If you receive an error about the functions parameter, ensure you pass the function schema as a list of dicts.
- If the assistant never calls your function, set function_call="auto" explicitly or make your function descriptions clearer.
- If authentication errors occur, check that the OPENAI_API_KEY environment variable is set.
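One more failure mode worth guarding against: the arguments field is a model-generated JSON string and can occasionally be malformed. A small defensive parse avoids a crash (safe_parse_arguments is a name chosen here for illustration, not an SDK helper):

```python
import json

def safe_parse_arguments(arguments_json):
    # The model returns arguments as a JSON string; it is usually valid,
    # but guard against malformed output so you can re-prompt instead of crashing.
    try:
        return json.loads(arguments_json)
    except json.JSONDecodeError:
        return None

print(safe_parse_arguments('{"location": "Boston"}'))  # {'location': 'Boston'}
print(safe_parse_arguments('{"location": Boston}'))    # None (malformed JSON)
```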
Key Takeaways
- Use the functions parameter to define callable functions in your chat request.
- Check the function_call field in the assistant's response to trigger local function execution.
- Set function_call="auto" to let the model decide when to call functions.
- Use the async client for non-blocking calls in async environments.
- Ensure your function schemas are clear and correctly formatted to improve function calling accuracy.