How-to · Beginner · 3 min read

How to use code interpreter with Responses API

Quick answer
Call the OpenAI Responses API with the tools parameter set to include the code interpreter tool ({"type": "code_interpreter", "container": {"type": "auto"}}). Send your prompt via the input parameter; the model writes and runs Python in a hosted sandbox, so you read the final answer from response.output_text and find the executed code as code_interpreter_call items in response.output.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install --upgrade openai (the Responses API needs a recent SDK release)

Setup

Install a recent version of the official OpenAI Python SDK (the Responses API is only available in newer releases) and set your API key as an environment variable.

  • Install SDK: pip install --upgrade openai (if you pin a version, quote it — an unquoted openai>=1.0 is a shell redirect)
  • Set environment variable: export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows)
bash
pip install --upgrade openai
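Before making any calls, it's worth confirming the key is actually visible to Python. A quick stdlib-only check (api_key_configured is just a helper name for this article):

```python
import os

def api_key_configured() -> bool:
    """True if OPENAI_API_KEY is set to a non-empty value."""
    return bool(os.environ.get("OPENAI_API_KEY"))

if __name__ == "__main__":
    if api_key_configured():
        print("OPENAI_API_KEY found.")
    else:
        print("OPENAI_API_KEY is not set - export it before running the examples.")
```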

Step by step

This example calls the Responses API with the code interpreter tool enabled. It sends a prompt asking the model to calculate a sum; the model writes and runs the code in OpenAI's hosted sandbox, so there is nothing to execute locally — you just read back the final answer and, if you want, the code that was run.

python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Hosted code interpreter tool; "auto" creates a fresh sandbox container
tools = [{
    "type": "code_interpreter",
    "container": {"type": "auto"}
}]

response = client.responses.create(
    model="gpt-4o-mini",
    input="Calculate the sum of 10 and 20 using code interpreter.",
    tools=tools
)

# output_text is a convenience accessor for the final text answer
print("Response text:", response.output_text)

# response.output lists every item the model produced, in order,
# including any code it ran in the sandbox
for item in response.output:
    if item.type == "code_interpreter_call":
        print("Code the model ran:")
        print(item.code)
output
Response text: The sum of 10 and 20 is 30.
Code the model ran:
print(10 + 20)

Common variations

  • Use model="gpt-4o" or other supported models for more capability.
  • Use async calls with AsyncOpenAI and await client.responses.create(...) inside an async function.
  • Enable streaming with stream=True; the call then yields typed events (such as response.output_text.delta) instead of one final response.
  • Handle multiple executions by iterating over response.output; a single response can contain several code_interpreter_call items.
python
import asyncio
import os

from openai import AsyncOpenAI

async def async_code_interpreter():
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

    stream = await client.responses.create(
        model="gpt-4o",
        input="Run Python code to multiply 7 by 6.",
        tools=[{"type": "code_interpreter", "container": {"type": "auto"}}],
        stream=True
    )

    # Streaming yields typed events; print text deltas as they arrive
    async for event in stream:
        if event.type == "response.output_text.delta":
            print(event.delta, end="", flush=True)
    print()

asyncio.run(async_code_interpreter())
output
42

Troubleshooting

  • If the API rejects the tool definition, check its shape: the code interpreter entry needs both "type": "code_interpreter" and a "container" field.
  • If response.output never contains code_interpreter_call items, make the prompt explicit (e.g. "use Python to...") and confirm the model supports the tool.
  • Check your API key and environment variable setup if authentication errors occur.

Key Takeaways

  • Enable the code interpreter by including it in the tools parameter of the Responses API call.
  • Inspect response.output for code_interpreter_call items to see when, and what, code was executed.
  • Use async and streaming options for more interactive code interpreter usage.
  • Always set your API key via environment variables for secure authentication.
  • Test prompts to ensure they trigger the code interpreter tool correctly.
Verified 2026-04 · gpt-4o-mini, gpt-4o