How to create validators with Guardrails AI
Quick answer
Use Guardrails AI to create validators by defining constraints on model outputs, either in a RAIL spec file or directly in Python (for example, as a Pydantic model). These validators enforce output formats, types, and safety checks automatically during generation.

Prerequisites
- Python 3.8+
- pip install guardrails-ai
- OpenAI API key (free tier works)
Setup
Install the guardrails-ai package and set your OpenAI API key as an environment variable.

pip install guardrails-ai

Step by step
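Before running the examples, you can confirm the key is visible to your process; a minimal standard-library check (OPENAI_API_KEY is the variable the OpenAI client reads):

import os

# Report whether the OpenAI API key is configured in this environment
api_key = os.environ.get("OPENAI_API_KEY")
print("OPENAI_API_KEY set:", api_key is not None)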
Define your output schema, then load it into a Guard to validate LLM outputs automatically. The example below describes the schema as a Pydantic model and builds a guard from it with Guard.from_pydantic.

from pydantic import BaseModel, Field
from guardrails import Guard

# Define the expected output schema: field types, descriptions, and bounds
class UserInfo(BaseModel):
    name: str = Field(description="User's full name")
    age: int = Field(ge=0, description="User's age")

# Initialize a Guard from the schema
guard = Guard.from_pydantic(output_class=UserInfo)

# Example LLM output to validate
llm_output = '{"name": "Alice", "age": 30}'

# Parse and validate the output
result = guard.parse(llm_output)
print(result.validated_output)

output
{'name': 'Alice', 'age': 30}

Common variations
You can also create custom validators in Python and register them with Guardrails, then attach them by name in a RAIL spec or to a Pydantic field. Guardrails supports async usage and integration with OpenAI and other LLMs.

from guardrails.validators import register_validator, PassResult, FailResult

# Register a reusable validator that checks an integer field
@register_validator(name="is-adult", data_type="integer")
def is_adult(value, metadata):
    if value >= 18:
        return PassResult()
    return FailResult(error_message=f"{value} is under 18")

Troubleshooting
- If validation fails with a ValidationError, check your spec syntax and ensure the output matches the expected format.
- Catch exceptions from guard.validate() (or inspect the outcome it returns) to debug which fields are invalid.
- Ensure your LLM output is valid JSON or otherwise matches the schema type.
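These checks can be reproduced with the standard library alone, which helps isolate whether a failure comes from malformed JSON or from a schema mismatch. The sketch below mirrors the user schema from the example above; validate_user_info is a hypothetical helper, not part of Guardrails:

import json

def validate_user_info(raw: str) -> dict:
    """Mirror the example schema: required string name, integer age >= 0."""
    data = json.loads(raw)  # raises json.JSONDecodeError on invalid JSON
    if not isinstance(data.get("name"), str):
        raise ValueError("name must be a string")
    if not isinstance(data.get("age"), int) or data["age"] < 0:
        raise ValueError("age must be a non-negative integer")
    return data

print(validate_user_info('{"name": "Alice", "age": 30}'))  # → {'name': 'Alice', 'age': 30}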
Key Takeaways
- Define validators in a RAIL spec or in Python (e.g., a Pydantic model) to enforce output constraints.
- Guardrails AI automatically validates and sanitizes LLM outputs to prevent errors.
- Use guard.validate() to check outputs and catch validation errors early.
- Validators support types, ranges, required fields, and custom logic.
- Guardrails integrates with OpenAI and other LLMs for safer generation.
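The validate-and-retry loop behind safe generation can be sketched without any library. Here call_llm is a hypothetical stand-in for a real model call and the re-ask message is illustrative; Guardrails automates this loop (including re-asking the model) for you:

import json

def call_llm(prompt: str, attempt: int) -> str:
    # Hypothetical model call: malformed on the first try, valid JSON after re-ask
    return "oops, not JSON" if attempt == 0 else '{"name": "Carol", "age": 41}'

def generate_validated(prompt: str, max_retries: int = 2) -> dict:
    for attempt in range(max_retries + 1):
        raw = call_llm(prompt, attempt)
        try:
            return json.loads(raw)  # a real guard would run full schema checks here
        except json.JSONDecodeError:
            prompt += "\nRespond with valid JSON only."  # re-ask, as Guardrails does
    raise RuntimeError("no valid output after retries")

print(generate_validated("Describe a user as JSON."))  # → {'name': 'Carol', 'age': 41}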