How-to · Intermediate · 3 min read

Guardrails logging and monitoring

Quick answer
Use the guardrails Python SDK to define output constraints and enforce them with built-in logging and monitoring hooks. Wrap calls to AI APIs such as OpenAI so that guardrail violations and usage metrics are captured programmatically for observability and debugging.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install guardrails openai

Setup

Install the guardrails and openai Python packages and set your OpenAI API key as an environment variable.

  • Run pip install guardrails openai
  • Set environment variable OPENAI_API_KEY with your API key
bash
pip install guardrails openai
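The environment variable can be set in the shell before running your script; OPENAI_API_KEY is the name the OpenAI SDK reads by default:

```shell
# Set your OpenAI API key for the current shell session;
# add this line to your shell profile to make it persistent
export OPENAI_API_KEY="your-api-key-here"

# Confirm the variable is set without printing the key itself
echo "${OPENAI_API_KEY:+OPENAI_API_KEY is set}"
```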

Step by step

Define a guardrail YAML schema to enforce output constraints and enable logging, then use the guardrails Python SDK to load the guardrail, wrap your AI call, and monitor violations. Note that the schema layout and method names below are illustrative; the exact API surface varies between guardrails libraries and versions, so check your installed version's documentation.

python
import os
from openai import OpenAI
from guardrails import Guard

# Load guardrail schema from YAML string or file
guard_yaml = '''
version: 1

models:
  - name: gpt-4o

rails:
  - name: sentiment_check
    type: output
    prompt: |
      Detect if the output contains negative sentiment.
    constraints:
      - name: no_negative_sentiment
        type: regex
        pattern: '^(?!.*\b(hate|bad|terrible)\b).*$'
        message: 'Negative sentiment detected in output.'
'''

# Initialize guard; from_yaml is illustrative here, so check your
# guardrails version for the exact constructor it exposes
guard = Guard.from_yaml(guard_yaml)

# Initialize OpenAI client
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Define prompt
prompt = "Write a positive review about a new product."

# Run the model call through the guard; invoke() and the violations
# attribute are illustrative, so check your guardrails version for
# the exact wrapper API
response = guard.invoke(
    client.chat.completions.create,
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}]
)

print("AI output:", response.choices[0].message.content)
print("Guardrail violations:", guard.violations)
output
AI output: This product is fantastic and exceeded all my expectations!
Guardrail violations: []
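Printing violations is fine for debugging, but for monitoring you usually want them persisted as structured logs. A minimal stdlib-only sketch; the violation record shape and the log_violations helper are hypothetical, not part of the guardrails SDK:

```python
import json
import logging

# Configure a logger that writes one JSON object per line; in production
# you would point this at a file or a log shipper instead of the console
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("guardrail_audit")

def log_violations(model: str, violations: list) -> int:
    """Emit each guardrail violation as a structured JSON log record."""
    for v in violations:
        logger.info(json.dumps({"model": model, "violation": v}))
    return len(violations)

# Hypothetical violation records, shaped like the constraint messages above
sample = [{"rail": "sentiment_check", "message": "Negative sentiment detected in output."}]
print("violations logged:", log_violations("gpt-4o", sample))  # violations logged: 1
```

Structured records make it easy to feed violations into whatever log aggregation you already run, rather than losing them in stdout.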

Common variations

Guardrails can also wrap asynchronous calls, streamed responses, and other AI providers. For example, swap in Anthropic's client, or stream partial outputs while checking guardrail compliance as chunks arrive.

python
import asyncio
import os
import anthropic
from guardrails import Guard

async def async_run():
    guard_yaml = '''
    version: 1
    models:
      - name: claude-3-5-sonnet-20241022
    rails:
      - name: profanity_check
        type: output
        prompt: |
          Detect if the output contains profanity.
        constraints:
          - name: no_profanity
            type: regex
            pattern: '^(?!.*\b(badword1|badword2)\b).*$'
            message: 'Profanity detected in output.'
    '''

    guard = Guard.from_yaml(guard_yaml)
    # Use the async client so the wrapped call can be awaited
    client = anthropic.AsyncAnthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

    prompt = "Write a polite greeting message."

    # Async call with guard; invoke_async is illustrative, so check
    # your guardrails version for async support
    response = await guard.invoke_async(
        client.messages.create,
        model="claude-3-5-sonnet-20241022",
        max_tokens=256,  # required by the Anthropic Messages API
        system="You are a helpful assistant.",
        messages=[{"role": "user", "content": prompt}]
    )

    # The Messages API returns content blocks, not a .completion field
    print("AI output:", response.content[0].text)
    print("Guardrail violations:", guard.violations)

asyncio.run(async_run())
output
AI output: Hello! I hope you have a wonderful day.
Guardrail violations: []
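Streaming follows the same idea: accumulate chunks and evaluate the constraint on the running text as each one arrives. A provider-agnostic sketch with a simulated chunk stream; monitor_stream is a hypothetical helper, not a guardrails API:

```python
import re

# The same negative-sentiment pattern used in the sentiment_check rail above
PATTERN = re.compile(r"\b(hate|bad|terrible)\b")

def monitor_stream(chunks):
    """Yield (chunk, violation) pairs; once the accumulated text trips
    the constraint, every subsequent chunk is flagged."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        violation = "Negative sentiment detected in output." if PATTERN.search(buffer) else None
        yield chunk, violation

# Simulated streamed response standing in for a real streaming API
for chunk, violation in monitor_stream(["This product ", "is terrible ", "and overpriced."]):
    print(repr(chunk), "->", violation or "ok")
```

Checking the accumulated buffer rather than individual chunks matters because a flagged word can be split across chunk boundaries.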

Troubleshooting

  • If guardrail violations appear unexpectedly, review your regex patterns and constraints for correctness.
  • Ensure your environment variables for API keys are set correctly to avoid authentication errors.
  • Enable verbose logging in guardrails to trace guardrail evaluation steps.
python
import logging

# Most guardrails libraries log through the standard library, usually
# under a logger named after the package; adjust the name if yours differs
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("guardrails").setLevel(logging.DEBUG)
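For the first bullet, it helps to test a constraint pattern against known-good and known-bad strings before deploying it, shown here with the sentiment pattern from the first guard:

```python
import re

# Negative-lookahead pattern copied from the sentiment_check constraint
pattern = re.compile(r"^(?!.*\b(hate|bad|terrible)\b).*$")

print(bool(pattern.match("This product is fantastic!")))  # True
print(bool(pattern.match("This product is terrible.")))   # False
```

Note that without re.DOTALL, the .* inside the lookahead stops at a newline, so a flagged word on a later line would slip through; add that flag if outputs can be multi-line.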

Key Takeaways

  • Use the guardrails SDK to enforce AI output constraints with automatic logging and monitoring.
  • Integrate guardrails with your AI client calls to capture violations and improve AI safety observability.
  • Support for async, streaming, and multiple AI providers makes guardrails flexible for production use.
Verified 2026-04 · gpt-4o, claude-3-5-sonnet-20241022