How to use function calling in Gemini API
Quick answer
Use the OpenAI Python SDK to call the Gemini API through Google's OpenAI-compatible endpoint, passing the functions parameter to chat.completions.create. Define your function schema and handle the function_call response to execute the function and reply accordingly.

Prerequisites
- Python 3.8+
- Gemini API key (free tier works)
- pip install openai>=1.0
Setup
Install the official OpenAI Python SDK and set your Gemini API key as an environment variable. The SDK reaches Gemini through Google's OpenAI-compatible endpoint, which you pass as base_url when constructing the client.

pip install openai>=1.0

Step by step
Define your function schema and call the Gemini model with the functions and function_call parameters. Inspect the response to detect whether the model wants to call a function, and handle it accordingly.
import os
from openai import OpenAI

# Gemini exposes an OpenAI-compatible endpoint; authenticate with a Gemini API key.
client = OpenAI(
    api_key=os.environ["GEMINI_API_KEY"],
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location"]
        }
    }
]

messages = [
    {"role": "user", "content": "What is the weather like in New York?"}
]

response = client.chat.completions.create(
    model="gemini-1.5-pro",
    messages=messages,
    functions=functions,
    function_call="auto"  # or {"name": "get_current_weather"} to force a specific function
)

message = response.choices[0].message
if message.function_call:  # openai>=1.0 returns objects, not dicts
    function_name = message.function_call.name
    function_args = message.function_call.arguments  # arrives as a JSON string
    print(f"Function to call: {function_name}")
    print(f"Arguments: {function_args}")
    # Here you would execute the function and send the result back to the
    # model in a follow-up message with role "function".
else:
    print(message.content)

Output
Function to call: get_current_weather
Arguments: {"location": "New York", "unit": "fahrenheit"}

Common variations
- Use function_call="none" to disable function calling.
- Use function_call="auto" to let the model decide when to call functions.
- Switch to a model like gemini-2.0-flash for faster responses.
- Implement async calls with asyncio and the AsyncOpenAI client.
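The step-by-step example stops at detecting the function call; one way to complete the round trip is sketched below, with get_current_weather stubbed as a local function returning fixed data and the follow-up API call shown as a comment (names mirror the schema above):

```python
import json

# Hypothetical local implementation of the function declared in the schema.
def get_current_weather(location, unit="fahrenheit"):
    # A real app would call a weather service here; stubbed for illustration.
    return {"location": location, "temperature": 72, "unit": unit}

# Suppose the model returned this function_call (shape as in the example above).
function_name = "get_current_weather"
function_args = '{"location": "New York", "unit": "fahrenheit"}'

# Arguments arrive as a JSON string, so decode them before calling.
args = json.loads(function_args)
result = get_current_weather(**args)

# Append the result as a "function" message, then call the API again so the
# model can turn the raw data into a natural-language answer.
messages = [
    {"role": "user", "content": "What is the weather like in New York?"},
    {"role": "function", "name": function_name, "content": json.dumps(result)},
]
# followup = client.chat.completions.create(model="gemini-1.5-pro",
#                                           messages=messages, functions=functions)
```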
import asyncio
import os
from openai import AsyncOpenAI  # async client; acreate() was removed in openai>=1.0

async def main():
    client = AsyncOpenAI(
        api_key=os.environ["GEMINI_API_KEY"],
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    )
    functions = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
                },
                "required": ["location"]
            }
        }
    ]
    messages = [{"role": "user", "content": "Tell me the weather in Boston."}]
    response = await client.chat.completions.create(
        model="gemini-2.0-flash",
        messages=messages,
        functions=functions,
        function_call="auto"
    )
    print(response.choices[0].message)

asyncio.run(main())

Output
ChatCompletionMessage(role='assistant', content=None, function_call=FunctionCall(name='get_current_weather', arguments='{"location": "Boston", "unit": "celsius"}'))

Troubleshooting
- If you get a 400 Bad Request, verify that your functions JSON schema is valid.
- If the model never calls functions, ensure function_call is set to "auto" or to a specific function name.
- Check that the environment variable holding your API key is set and matches the name your code reads.
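Since the model's arguments are just a JSON string, they can occasionally be malformed or incomplete; defensive parsing makes these failures easy to spot. A minimal sketch (the helper name and REQUIRED set are illustrative, mirroring the schema's "required" list):

```python
import json

REQUIRED = {"location"}  # mirrors the "required" list in the function schema

def parse_weather_args(raw):
    """Decode the arguments JSON string and check required keys.

    Returns (args, error); error is None on success.
    """
    try:
        args = json.loads(raw)
    except json.JSONDecodeError as exc:
        return None, f"arguments were not valid JSON: {exc}"
    missing = REQUIRED - args.keys()
    if missing:
        return None, f"missing required keys: {sorted(missing)}"
    return args, None

args, err = parse_weather_args('{"location": "Boston", "unit": "celsius"}')
# err is None here; a malformed string would return an error message instead.
```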
Key Takeaways
- Use the functions parameter to define callable functions in Gemini API requests.
- Set function_call to "auto" to let the model decide when to invoke a function.
- Parse the function_call field in the response to handle function execution in your app.
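As you declare more functions, handling the parsed function_call with an if/elif chain gets unwieldy; a dispatch table is one common pattern. This sketch (registry and helper names are illustrative) maps the name the model returns to a local callable:

```python
import json

# Illustrative local implementation matching the schema used above.
def get_current_weather(location, unit="fahrenheit"):
    return {"location": location, "temperature": 72, "unit": unit}  # stub

# Map each declared function name to the Python callable that implements it.
FUNCTION_REGISTRY = {
    "get_current_weather": get_current_weather,
}

def dispatch(name, arguments_json):
    """Look up the named function and invoke it with the decoded arguments."""
    fn = FUNCTION_REGISTRY.get(name)
    if fn is None:
        raise ValueError(f"model requested unknown function: {name}")
    return fn(**json.loads(arguments_json))

result = dispatch("get_current_weather", '{"location": "New York"}')
```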