How to use the OpenAI fine-tuning UI
Quick answer
Use the OpenAI fine-tuning UI by uploading your training data in JSONL format, then create and monitor fine-tuning jobs directly on the platform. You can also use the OpenAI Python SDK to upload files, start fine-tuning jobs, and query your custom fine-tuned models programmatically.
PREREQUISITES
- Python 3.8+
- OpenAI API key (fine-tuning requires a paid account; training is billed per token)
- pip install openai>=1.0
Setup
Install the official openai Python package and set your API key as an environment variable.
```shell
pip install "openai>=1.0"
```

Output:

```
Collecting openai
  Downloading openai-1.x.x-py3-none-any.whl (xx kB)
Installing collected packages: openai
Successfully installed openai-1.x.x
```
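With the package installed, export your API key so the SDK can pick it up automatically (the key value below is a placeholder; use your own key from the platform dashboard):

```shell
# The openai SDK reads OPENAI_API_KEY from the environment by default.
export OPENAI_API_KEY="sk-your-key-here"
```

You can also pass the key explicitly via `OpenAI(api_key=...)`, as the examples below do.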
Step by step
This example shows how to upload a training file, create a fine-tuning job, monitor its status, and query the fine-tuned model using the OpenAI Python SDK.
```python
import os
import time

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Step 1: Upload training data (JSONL format)
with open("training_data.jsonl", "rb") as f:
    training_file = client.files.create(file=f, purpose="fine-tune")
print(f"Uploaded file ID: {training_file.id}")

# Step 2: Create fine-tuning job
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(f"Created fine-tuning job ID: {job.id}")

# Step 3: Poll job status until done
while True:
    status = client.fine_tuning.jobs.retrieve(job.id)
    print(f"Job status: {status.status}")
    if status.status in ["succeeded", "failed"]:
        break
    time.sleep(10)

if status.status == "succeeded":
    fine_tuned_model = status.fine_tuned_model
    print(f"Fine-tuned model ready: {fine_tuned_model}")

    # Step 4: Query the fine-tuned model
    response = client.chat.completions.create(
        model=fine_tuned_model,
        messages=[{"role": "user", "content": "Hello, how are you?"}],
    )
    print("Response:", response.choices[0].message.content)
else:
    print("Fine-tuning job failed.")
```

Output:

```
Uploaded file ID: file-abc123xyz
Created fine-tuning job ID: ftjob-xyz789abc
Job status: running
Job status: running
Job status: succeeded
Fine-tuned model ready: gpt-4o-mini-2024-07-18-ft-abc123
Response: Hello! I'm your fine-tuned assistant. How can I help you today?
```
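The walkthrough above assumes a `training_data.jsonl` already exists. The file must be JSON Lines: one JSON object per line, each with a `messages` array in the chat format (and note the API enforces a minimum number of examples, so a real file needs more than the two placeholder conversations sketched here):

```python
import json

# Each line is one training example: a "messages" list of role/content dicts,
# mirroring the chat-completions request format.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is JSONL?"},
        {"role": "assistant", "content": "JSON Lines: one JSON object per line."},
    ]},
    {"messages": [
        {"role": "user", "content": "Say hello."},
        {"role": "assistant", "content": "Hello!"},
    ]},
]

with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Quick sanity check: every line must parse and contain a "messages" list.
with open("training_data.jsonl") as f:
    for line in f:
        record = json.loads(line)
        assert isinstance(record["messages"], list)
```

Running a check like this locally before uploading catches most "invalid file format" rejections early.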
Common variations
You can poll job status asynchronously with asyncio for non-blocking workflows, and you can fine-tune a different base model by changing the model parameter. The OpenAI UI also supports drag-and-drop file uploads and job monitoring without any code.
```python
import asyncio
import os

# AsyncOpenAI exposes the same methods as OpenAI, but as awaitable coroutines,
# so polling does not block the event loop.
from openai import AsyncOpenAI

client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

async def monitor_fine_tune(job_id: str):
    while True:
        status = await client.fine_tuning.jobs.retrieve(job_id)
        print(f"Job status: {status.status}")
        if status.status in ["succeeded", "failed"]:
            return status
        await asyncio.sleep(10)

async def main():
    with open("training_data.jsonl", "rb") as f:
        training_file = await client.files.create(file=f, purpose="fine-tune")
    job = await client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-4o",
    )
    print(f"Started job {job.id}")
    status = await monitor_fine_tune(job.id)
    if status.status == "succeeded":
        print(f"Fine-tuned model: {status.fine_tuned_model}")

asyncio.run(main())
```

Output:

```
Started job ftjob-xyz789abc
Job status: running
Job status: running
Job status: succeeded
Fine-tuned model: gpt-4o-ft-xyz789abc
```
Troubleshooting
- If you see `Invalid file format`, ensure your training data is valid JSONL with a `{"messages": [...]}` structure on each line.
- If the job fails, check the error message in the job details on the UI or via the API.
- API rate limits can cause errors; retry with exponential backoff.
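The backoff advice above can be sketched as a small retry wrapper. The helper below is generic and uses a stand-in `flaky` function so it runs without the API; in real code you would catch the SDK's `openai.RateLimitError` rather than bare `Exception`:

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying with exponential backoff plus jitter on failure."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            # Wait base_delay * 1, 2, 4, ... seconds, plus a little jitter
            # so concurrent clients don't retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))

# Stand-in for an API call that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # prints "ok" after two retries
```

Wrapping `client.fine_tuning.jobs.create` or the polling call in something like `with_backoff` keeps transient 429 responses from killing the whole workflow.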
Key Takeaways
- Use the OpenAI fine-tuning UI to upload JSONL training data and manage jobs visually.
- Programmatically upload files and create fine-tuning jobs with the OpenAI Python SDK using `client.files.create` and `client.fine_tuning.jobs.create`.
- Poll job status until completion before querying your custom fine-tuned model.
- Async polling and different base models are supported for flexible workflows.
- Validate your training data format to avoid common errors during fine-tuning.