What is function calling in LLMs?
Function calling is a capability that allows language models to generate structured requests to invoke external functions or APIs during a conversation. This enables seamless integration of real-time data, computations, or actions beyond text generation.
How it works
Function calling in LLMs works by letting the model output a structured JSON or similar format that specifies which external function to call and with what parameters. Think of it like a smart assistant that not only talks but also knows when to press buttons or run commands on your behalf. The model predicts the function name and arguments based on the conversation context, then your application executes the function and returns the result back to the model for further processing.
This mechanism bridges natural language understanding with programmatic actions, enabling dynamic, interactive workflows.
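To make this concrete, here is a minimal sketch of the kind of structured payload the model emits when it decides to call a function, and how an application parses it. The field names follow the OpenAI chat completions format, but the values shown are made up for illustration:

```python
import json

# Illustrative example of the structured output the model might emit
# when it decides to call a function. Note that "arguments" is a JSON
# string, not a parsed object.
model_output = {
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Boston", "unit": "fahrenheit"}',
    }
}

# The application parses the arguments string before invoking
# the real function.
call = model_output["function_call"]
args = json.loads(call["arguments"])
print(call["name"], args["location"])  # get_current_weather Boston
```

The key point is that the model never executes anything itself; it only emits this description of a call, and your code decides whether and how to run it.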
Concrete example
Here is a Python example using the OpenAI SDK to demonstrate function calling with gpt-4o. The model is asked to get the current weather, and it returns a function call with parameters. Your code then simulates executing that function and feeds the result back.
```python
import os
import json
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Define the function the model can call
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# User prompt
messages = [
    {"role": "user", "content": "What's the weather like in Boston?"}
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    functions=functions,
    function_call="auto",  # Let the model decide whether to call the function
)

message = response.choices[0].message
if message.function_call:
    # Extract function call details; arguments arrive as a JSON string
    func_name = message.function_call.name
    func_args = json.loads(message.function_call.arguments)

    # Simulate function execution
    # In a real app, call your weather API here
    weather_result = {
        "location": func_args["location"],
        "temperature": 68,
        "unit": "fahrenheit",
        "condition": "Sunny",
    }

    # Send the function result back to the model
    followup = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            *messages,
            message,
            {
                "role": "function",
                "name": func_name,
                "content": json.dumps(weather_result),
            },
        ],
    )
    print(followup.choices[0].message.content)
else:
    print(message.content)
```

Example output: The current weather in Boston is 68 degrees Fahrenheit and sunny.
When to use it
Use function calling when you want your LLM to interact with real-world data, perform calculations, or trigger actions that require up-to-date or external information. Examples include fetching weather, booking tickets, querying databases, or controlling IoT devices.
Do not use function calling if your task is purely text generation or static knowledge retrieval, as it adds complexity and requires backend integration.
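When you do adopt function calling, your application needs a dispatch step that maps the model's requested function name to real code. A minimal sketch of that step, with a hypothetical local implementation standing in for a real weather API:

```python
import json

# Hypothetical local implementation of a function exposed to the model.
def get_current_weather(location, unit="fahrenheit"):
    # In a real app this would call a weather API.
    return {"location": location, "temperature": 68, "unit": unit}

# Registry mapping function names (as declared in the schema) to callables.
FUNCTION_REGISTRY = {
    "get_current_weather": get_current_weather,
}

def dispatch(function_call):
    """Execute the function the model requested and return its result."""
    func = FUNCTION_REGISTRY[function_call["name"]]
    args = json.loads(function_call["arguments"])
    return func(**args)

result = dispatch({
    "name": "get_current_weather",
    "arguments": '{"location": "Boston"}',
})
print(result)
```

A registry like this keeps the model from triggering arbitrary code: only names you explicitly list can be dispatched, and unknown names fail loudly.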
Key terms
| Term | Definition |
|---|---|
| Function calling | The ability of an LLM to generate structured requests to invoke external functions or APIs. |
| Function schema | A JSON schema describing the function name, parameters, and descriptions the model can call. |
| Function call response | The output from the external function executed based on the model's request. |
| Function_call parameter | A special field in the LLM response indicating the function name and arguments to call. |
Key takeaways
- Function calling lets LLMs trigger external functions dynamically during conversations.
- It requires defining function schemas so the model knows what it can call and with which parameters.
- Use function calling to integrate real-time data or actions beyond text generation.
- Your application must handle executing the function and returning results to the model.
- Function calling improves LLM interactivity and automation in practical applications.