Fireworks AI function calling
Quick answer
Use the OpenAI-compatible Python SDK with Fireworks AI by specifying the tools parameter for function calling in chat.completions.create. Detect finish_reason == 'tool_calls' in the response to handle tool invocation and parse arguments from message.tool_calls.
PREREQUISITES
- Python 3.8+
- Fireworks AI API key
- pip install openai>=1.0
Setup
Install the openai Python package (v1 or later) and set your Fireworks AI API key as an environment variable.
- Install the SDK:

```shell
pip install openai
```

- Set the environment variable: `export FIREWORKS_API_KEY='your_api_key'` (Linux/macOS) or `set FIREWORKS_API_KEY=your_api_key` (Windows)

pip install openai output:

```
Collecting openai
  Downloading openai-1.x.x-py3-none-any.whl (xx kB)
Installing collected packages: openai
Successfully installed openai-1.x.x
```
Step by step
This example shows how to call a function using Fireworks AI's OpenAI-compatible API. Define the tools parameter describing the function, send a chat completion request, and handle the tool call response.
```python
import os
import json
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["FIREWORKS_API_KEY"],
    base_url="https://api.fireworks.ai/inference/v1",
)

# Define the function as a tool
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            },
            "required": ["location"]
        }
    }
}]

messages = [{"role": "user", "content": "What's the weather in New York?"}]

response = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p3-70b-instruct",
    tools=tools,
    messages=messages
)

print("Assistant reply:", response.choices[0].message.content)

# Check if the model requested a tool call
if response.choices[0].finish_reason == "tool_calls":
    tool_call = response.choices[0].message.tool_calls[0]
    function_name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)
    print(f"Function called: {function_name}")
    print(f"Arguments: {arguments}")
```

output
Assistant reply:
Function called: get_weather
Arguments: {'location': 'New York'}

Common variations
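After detecting the tool call, a typical next step is to run the function locally and send its result back to the model as a role "tool" message in a second chat.completions.create call. A minimal sketch of that round trip; the get_weather body and the simulated assistant message below are illustrative stand-ins, not real Fireworks AI output:

```python
import json
from types import SimpleNamespace

# Hypothetical local implementation of the get_weather tool declared above;
# a real version would query an actual weather API.
def get_weather(location: str) -> str:
    return json.dumps({"location": location, "forecast": "sunny"})

def tool_result_messages(assistant_message):
    """Execute each requested tool call and build role='tool' reply messages."""
    replies = []
    for tool_call in assistant_message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        result = get_weather(**args)
        replies.append({
            "role": "tool",
            "tool_call_id": tool_call.id,  # links the result to the request
            "content": result,
        })
    return replies

# Simulated object shaped like response.choices[0].message
fake_call = SimpleNamespace(
    id="call_0",
    function=SimpleNamespace(name="get_weather",
                             arguments='{"location": "New York"}'),
)
fake_message = SimpleNamespace(tool_calls=[fake_call])
print(tool_result_messages(fake_message))
```

Appending the assistant message plus these tool replies to `messages` and calling the API again lets the model produce a final natural-language answer.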
You can use other Fireworks AI models by changing the model parameter. Async calls are supported via the AsyncOpenAI client with async/await. Streaming responses are not currently supported for function calling. Always use the tools parameter instead of the deprecated functions parameter.
```python
import asyncio
import os
from openai import AsyncOpenAI

async def async_function_call():
    # Use the async client; awaiting .create() replaces the old v0 .acreate()
    client = AsyncOpenAI(
        api_key=os.environ["FIREWORKS_API_KEY"],
        base_url="https://api.fireworks.ai/inference/v1",
    )
    tools = [{
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Get current time",
            "parameters": {"type": "object", "properties": {}, "required": []}
        }
    }]
    messages = [{"role": "user", "content": "What time is it?"}]
    response = await client.chat.completions.create(
        model="accounts/fireworks/models/llama-v3p3-70b-instruct",
        tools=tools,
        messages=messages
    )
    print("Async assistant reply:", response.choices[0].message.content)

asyncio.run(async_function_call())
```

output
Async assistant reply:
Troubleshooting
- If you get an authentication error, verify that your FIREWORKS_API_KEY environment variable is set correctly.
- If finish_reason is not tool_calls, the model did not invoke the function; check your tools definition and prompt.
- Use the correct Fireworks AI model name starting with accounts/fireworks/models/.
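Before debugging the API side of an authentication error, it can help to confirm the key is actually visible to the Python process; a quick sanity check, using the variable name from this guide:

```python
import os

# Prints whether the key is visible to the current Python process;
# an empty or missing value usually explains authentication errors.
key = os.environ.get("FIREWORKS_API_KEY", "")
print("FIREWORKS_API_KEY set:", bool(key))
```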
Key Takeaways
- Use the OpenAI-compatible tools parameter for function calling with Fireworks AI.
- Check finish_reason == 'tool_calls' to detect when the model requests a function call.
- Always parse message.tool_calls to extract function name and arguments.
- Use environment variables for API keys and specify Fireworks AI base_url in the OpenAI client.
- Avoid deprecated parameters like functions= or function_call=.
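The deprecated and current request shapes wrap the same JSON schema differently; a sketch of the two payloads as plain dicts (for illustration only, not SDK calls):

```python
schema = {
    "name": "get_weather",
    "description": "Get weather for a location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}

# Current style: each tool wrapped in {"type": "function", "function": ...}
current = {"tools": [{"type": "function", "function": schema}]}

# Deprecated style: bare schemas under "functions" -- avoid in new code
deprecated = {"functions": [schema]}

print(current["tools"][0]["function"]["name"])
```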