Beginner to Intermediate · 3 min read

How to use Pydantic AI with OpenAI

Quick answer
Use pydantic-ai by defining a Pydantic model for your expected response, creating an Agent with output_type set to that model, and calling agent.run_sync. This gives you typed, validated outputs from OpenAI models like gpt-4o-mini.

PREREQUISITES

  • Python 3.9+
  • OpenAI API key (free tier works)
  • pip install pydantic-ai

Setup

Install the required packages and set your OpenAI API key as an environment variable.

  • Install the pydantic-ai package (it pulls in pydantic and the openai SDK as dependencies).
  • Set OPENAI_API_KEY in your environment.
bash
pip install pydantic-ai
output
Collecting pydantic-ai
Collecting pydantic
Collecting openai
Successfully installed openai pydantic pydantic-ai
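
The other setup step, exporting the API key, looks like this in a POSIX shell (the key value below is a placeholder — substitute your real key):

```shell
# Placeholder key: replace with your real OpenAI API key
export OPENAI_API_KEY="sk-your-key-here"
# Confirm the variable is set in this shell session
echo "$OPENAI_API_KEY"
```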

Step by step

Define a Pydantic model for the AI response, create an Agent pointed at an OpenAI model with output_type set to your model, and run it. The model's reply is parsed and validated automatically.

python
from pydantic import BaseModel
from pydantic_ai import Agent

# Define your Pydantic model for structured output
class User(BaseModel):
    name: str
    age: int

# Create an agent bound to an OpenAI model; the client reads
# OPENAI_API_KEY from the environment automatically
agent = Agent("openai:gpt-4o-mini", output_type=User)

# Run the agent; the reply is parsed and validated into a User
result = agent.run_sync("Extract: John is 30 years old")
user = result.output

print(f"Name: {user.name}, Age: {user.age}")
output
Name: John, Age: 30
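
Because the result is an ordinary Pydantic model, you can serialize or inspect it like any other model. A minimal standalone sketch — the User instance here is constructed directly, exactly as the agent would return it, so no API call is involved:

```python
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

# Construct the model directly, as the agent would return it
user = User(name="John", age=30)

# Pydantic v2 serialization helpers work as usual
print(user.model_dump())       # {'name': 'John', 'age': 30}
print(user.model_dump_json())  # {"name":"John","age":30}
```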

Common variations

You can use async calls with await via agent.run if your environment supports it. To switch OpenAI models, change the model string, for example "openai:gpt-4o" instead of "openai:gpt-4o-mini". pydantic-ai also supports other providers through the same interface, such as Anthropic models via strings like "anthropic:claude-3-5-sonnet-latest".

python
import asyncio

async def async_example():
    # agent.run is the async counterpart of agent.run_sync
    result = await agent.run("Extract: Alice is 25 years old")
    user = result.output
    print(f"Async Name: {user.name}, Age: {user.age}")

asyncio.run(async_example())
output
Async Name: Alice, Age: 25
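
To run several extractions concurrently, the usual asyncio.gather pattern applies. A sketch with a stand-in coroutine — extract here is a hypothetical placeholder for agent.run, so no network call is made:

```python
import asyncio

async def extract(text: str) -> str:
    # Stand-in for `await agent.run(text)`
    await asyncio.sleep(0)
    return text.upper()

async def main() -> list[str]:
    # Fan out both requests and await them together
    return await asyncio.gather(extract("john"), extract("alice"))

results = asyncio.run(main())
print(results)  # ['JOHN', 'ALICE']
```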

Troubleshooting

  • If you get validation errors, ensure your Pydantic model matches the expected AI output format.
  • If the API key is missing or invalid, set OPENAI_API_KEY correctly in your environment.
  • Use the latest openai and pydantic-ai packages to avoid compatibility issues.
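
To see what a mismatch between your model and the AI output looks like, you can trigger a validation error directly with plain Pydantic — no API call involved:

```python
from pydantic import BaseModel, ValidationError

class User(BaseModel):
    name: str
    age: int

try:
    # "thirty" cannot be coerced to int, so validation fails
    User.model_validate({"name": "John", "age": "thirty"})
except ValidationError as e:
    print(e.errors()[0]["type"])  # int_parsing
```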

Key Takeaways

  • Use pydantic-ai to parse and validate AI responses with Pydantic models.
  • Create an Agent with output_type set to your Pydantic model to get typed outputs.
  • The OpenAI API key is read from the OPENAI_API_KEY environment variable.
  • Supports both synchronous (run_sync) and asynchronous (run) usage patterns.
  • Keep your Pydantic model aligned with the expected AI output structure to avoid validation errors.
Verified 2026-04 · gpt-4o-mini