How to validate LLM output with Guardrails AI
Quick answer
Use Guardrails to define output schemas and constraints that validate LLM responses automatically. Integrate the Guardrails Python SDK with your LLM calls to enforce structured, safe outputs.
Prerequisites
- Python 3.8+
- An OpenAI API key
- pip install guardrails-ai openai
Setup
Install the guardrails-ai Python package and set your OpenAI API key as an environment variable. Guardrails works with OpenAI and other LLMs to validate outputs against defined schemas.
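Setting the key for the current shell session might look like this (the placeholder value is illustrative; the OpenAI SDK reads the OPENAI_API_KEY variable):

```shell
# Set the key for the current shell session; replace the placeholder
# value with your real key before making API calls
export OPENAI_API_KEY="your-key-here"

# Confirm the variable is set
printenv OPENAI_API_KEY
```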
pip install guardrails-ai openai
Step by step
Define the expected output format and constraints as a Pydantic model (Guardrails can also load RAIL specs). Then create a Guard instance from the model, route your LLM call through the guard, and the output is validated automatically.
from pydantic import BaseModel, Field
from guardrails import Guard

# Define the expected output as a Pydantic model.
# Field constraints (ge/le) bound the age to 0-120.
class UserInfo(BaseModel):
    name: str = Field(description="User's full name")
    age: int = Field(description="User's age", ge=0, le=120)

# Create a Guard from the model. Recent releases use Guard.for_pydantic;
# older ones expose the same idea as Guard.from_pydantic.
guard = Guard.for_pydantic(UserInfo)

# Define the prompt
prompt = "Extract user info from: John Doe is 29 years old."

# Call the LLM through the guard. The OPENAI_API_KEY environment
# variable is picked up automatically, and the response is validated
# against UserInfo before it is returned.
result = guard(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# Access the validated output
print("Validated output:", result.validated_output)
Output
Validated output: {'name': 'John Doe', 'age': 29}
Common variations
- Use a different LLM provider by passing that provider's model name to the guard call; recent Guardrails releases route requests through LiteLLM.
- Define more complex schemas with nested models, enums, or lists in the Pydantic model.
- Use an AsyncGuard for async calls if your application runs on an event loop.
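As a sketch of the nested-schema variation (class and field names here are illustrative; only plain Pydantic is exercised, since the resulting model is passed to a Guard the same way as the flat schema above):

```python
from enum import Enum
from typing import List
from pydantic import BaseModel, Field

class Role(str, Enum):
    ADMIN = "admin"
    MEMBER = "member"

class Address(BaseModel):
    city: str
    country: str

class UserProfile(BaseModel):
    name: str
    role: Role                                  # enum-constrained field
    addresses: List[Address]                    # nested objects
    tags: List[str] = Field(default_factory=list)

# Validate a sample payload locally, exactly as a guard would on LLM output
profile = UserProfile.model_validate({
    "name": "Ada",
    "role": "admin",
    "addresses": [{"city": "London", "country": "UK"}],
})
print(profile.role.value, len(profile.addresses))  # admin 1
```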
An async sketch, assuming the AsyncGuard class available in recent releases:
import asyncio
from guardrails import AsyncGuard

async_guard = AsyncGuard.for_pydantic(UserInfo)

async def async_example():
    result = await async_guard(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Extract user info from: Alice is 35 years old."}],
    )
    print("Async validated output:", result.validated_output)

asyncio.run(async_example())
Output
Async validated output: {'name': 'Alice', 'age': 35}
Troubleshooting
- If validation fails, inspect the returned outcome: validation_passed will be False and validated_output will be None. Depending on each validator's on_fail setting, a guardrails.errors.ValidationError may be raised instead. Check your schema constraints and the LLM output format.
- Ensure your prompt clearly instructs the LLM to produce output matching the schema.
- Enable verbose logging to see the raw LLM output and validation steps, for example with Python's standard logging module:
import logging
logging.basicConfig(level=logging.DEBUG)
Key Takeaways
- Define explicit output schemas with Pydantic models to enforce structured LLM responses.
- Integrate Guardrails with your LLM client to automatically validate and parse outputs.
- Use verbose logging and clear prompts to troubleshoot validation errors effectively.
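To see why a guard rejects an out-of-range value, the constraints from the walkthrough can be exercised with plain Pydantic, independent of any LLM call (UserInfo mirrors the schema above):

```python
from pydantic import BaseModel, Field, ValidationError

class UserInfo(BaseModel):
    name: str
    age: int = Field(ge=0, le=120)

# A well-formed payload passes validation
ok = UserInfo.model_validate({"name": "John Doe", "age": 29})
print(ok.age)  # 29

# An age outside 0-120 is rejected before it reaches your application
try:
    UserInfo.model_validate({"name": "John Doe", "age": 150})
except ValidationError as exc:
    print("rejected:", exc.error_count(), "error")
```

This mirrors what a guard does internally, which makes it a quick way to check whether a failure comes from the schema itself or from the LLM's output format.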