How-to · Beginner · 3 min read

How to upload files for code interpreter

Quick answer
To upload files for the code interpreter, use the client.files.create() method from the openai Python SDK with purpose="assistants", then attach the returned file ID to the code_interpreter tool via the Assistants API. This gives the code interpreter access to your uploaded file when the assistant runs.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install "openai>=1.0"

Setup

Install the official openai Python SDK and set your API key as an environment variable.

  • Install SDK: pip install openai
  • Set environment variable: export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows; takes effect in new terminals)
bash
pip install openai
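Before running any of the examples, it can help to confirm the key is actually visible to Python. A minimal sketch (the helper name is just for illustration):

```python
import os

def api_key_present() -> bool:
    """Return True if OPENAI_API_KEY is set to a non-empty value."""
    return bool(os.environ.get("OPENAI_API_KEY"))

if not api_key_present():
    print("Set OPENAI_API_KEY before running the examples below.")
```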

Step by step

Upload your file using client.files.create(), then create an assistant with the code_interpreter tool enabled and the uploaded file attached via tool_resources. Running the assistant on a thread lets it open and analyze the file.

python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Step 1: Upload the file (purpose="assistants" covers code interpreter use)
with open("example.py", "rb") as f:
    upload_response = client.files.create(
        file=f,
        purpose="assistants"
    )
file_id = upload_response.id
print(f"Uploaded file ID: {file_id}")

# Step 2: Create an assistant with the code_interpreter tool and the file attached
assistant = client.beta.assistants.create(
    model="gpt-4o-mini",
    tools=[{"type": "code_interpreter"}],
    tool_resources={"code_interpreter": {"file_ids": [file_id]}}
)

# Step 3: Ask about the file on a thread and run the assistant to completion
thread = client.beta.threads.create(
    messages=[{"role": "user", "content": "Please analyze the code in the uploaded file."}]
)
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)

# Step 4: Print the assistant's latest reply (messages are newest-first)
messages = client.beta.threads.messages.list(thread_id=thread.id)
print("Assistant response:")
print(messages.data[0].content[0].text.value)
output
Uploaded file ID: file-abc123xyz
Assistant response:
The code you uploaded defines a function that calculates factorials recursively. Here's a summary...

Common variations

You can upload any file type the code interpreter supports, such as .py, .csv, or .txt. For asynchronous usage, switch to the SDK's AsyncOpenAI client and await the same method names. You can also switch models by changing the model parameter.

python
import asyncio
import os
from openai import AsyncOpenAI

async def async_upload_and_query():
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

    # Async file upload: same method names as the sync client, just awaited
    with open("data.csv", "rb") as f:
        upload_response = await client.files.create(
            file=f,
            purpose="assistants"
        )
    file_id = upload_response.id
    print(f"Uploaded file ID: {file_id}")

    # Attach the file to a code_interpreter assistant and run it on a thread
    assistant = await client.beta.assistants.create(
        model="gpt-4o-mini",
        tools=[{"type": "code_interpreter"}],
        tool_resources={"code_interpreter": {"file_ids": [file_id]}}
    )
    thread = await client.beta.threads.create(
        messages=[{"role": "user", "content": "Analyze the data in the uploaded file."}]
    )
    run = await client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=assistant.id
    )

    messages = await client.beta.threads.messages.list(thread_id=thread.id)
    print("Assistant response:")
    print(messages.data[0].content[0].text.value)

asyncio.run(async_upload_and_query())
output
Uploaded file ID: file-xyz789abc
Assistant response:
The CSV data contains sales figures for Q1. Here's the summary and insights...
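When you attach several files at once, the code_interpreter tool_resources payload has the same shape, just with more IDs. A small helper can keep that tidy (the function name here is purely illustrative):

```python
def code_interpreter_resources(file_ids):
    """Build the tool_resources payload that attaches files to the code_interpreter tool."""
    return {"code_interpreter": {"file_ids": list(file_ids)}}

payload = code_interpreter_resources(["file-abc123", "file-def456"])
print(payload)
```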

Troubleshooting

  • If you get a 403 Forbidden error, verify your API key and permissions.
  • If the file upload fails, check the file size and format; the Files API accepts common code and data files up to 512 MB per file.
  • Ensure you specify purpose="assistants" when uploading; the Files API has no separate "code-interpreter" purpose value.
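A cheap way to avoid failed uploads is to check size and extension locally first. A minimal sketch (the helper name and allowed-extension list are illustrative; 512 MB is the documented per-file limit):

```python
import os
import tempfile

MAX_FILE_BYTES = 512 * 1024 * 1024  # documented per-file limit for the Files API

def check_uploadable(path, allowed_exts=(".py", ".csv", ".txt")):
    """Return True if the file's extension and size look acceptable before uploading."""
    ext = os.path.splitext(path)[1].lower()
    return ext in allowed_exts and os.path.getsize(path) <= MAX_FILE_BYTES

# Example with a small temporary CSV
with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as tmp:
    tmp.write(b"region,sales\nEMEA,100\n")
print(check_uploadable(tmp.name))  # → True
```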

Key Takeaways

  • Use client.files.create() with purpose="assistants" to upload files for the code interpreter.
  • Attach the uploaded file_id to the assistant's code_interpreter tool_resources so runs can open the file.
  • The OpenAI Python SDK offers synchronous (OpenAI) and asynchronous (AsyncOpenAI) clients with identical method names.
  • Check file size, format, and API key permissions if uploads or requests fail.
  • Switch models or file types easily by adjusting parameters in the SDK calls.
Verified 2026-04 · gpt-4o-mini