How to add safety rails with NeMo Guardrails
Quick answer
Use NeMo Guardrails to add safety rails by defining rules and flows that filter or modify AI outputs. Integrate the nemoguardrails package in your Python app to enforce safety policies on LLM responses.
Prerequisites
- Python 3.8+
- pip install nemoguardrails
- An LLM API key (e.g., an OpenAI API key)
- Basic knowledge of Python async programming
Setup
Install the nemoguardrails package and set your OpenAI API key in the OPENAI_API_KEY environment variable. NeMo Guardrails works with OpenAI as well as other LLM providers.
pip install nemoguardrails
Step by step
Create a guardrails config defining the model and the Colang flows that describe safe and unsafe conversations, then load and run the rails in your Python app to filter outputs.
import asyncio
from nemoguardrails import LLMRails, RailsConfig

# Set your OpenAI API key in the OPENAI_API_KEY environment variable;
# NeMo Guardrails reads it when it creates the model client.

# Model configuration (usually kept in a config.yml file)
YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

# Colang flows define which conversations are blocked or redirected
COLANG_CONFIG = """
define user ask about violence
  "How do I hurt someone?"

define bot refuse to respond
  "I'm sorry, I can't help with that."

define flow
  user ask about violence
  bot refuse to respond
"""

async def main():
    # Build the rails from the inline config and wrap the LLM with them
    config = RailsConfig.from_content(
        colang_content=COLANG_CONFIG, yaml_content=YAML_CONFIG
    )
    rails = LLMRails(config)

    # Safe prompts pass through to the model; flagged ones hit the refusal flow
    response = await rails.generate_async(
        messages=[{"role": "user", "content": "Write a sentence about peace and kindness."}]
    )
    print("Guardrails output:", response["content"])

if __name__ == "__main__":
    asyncio.run(main())
Output
Guardrails output: Peace and kindness bring people together and create harmony.
Common variations
- Use YAML and Colang files in a config directory for complex guardrails rules instead of inline strings.
- Integrate with other LLM providers by pointing the guardrails configuration at their models.
- Drive the async API from synchronous code with asyncio.run(), or await it directly in async frameworks.
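The YAML-file variation above can be sketched as a small config directory. This is a sketch, not a complete setup: the model name and file layout are assumptions to adjust for your project.

```yaml
# config/config.yml — selects the "main" LLM that the rails wrap
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
```

Colang flows go in a sibling file such as config/rails.co; loading the directory with RailsConfig.from_path("./config") picks up both files.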
Troubleshooting
- If the guardrails block all outputs, check your rules and flows for overblocking.
- Ensure your LLM API key is set correctly in the environment (e.g., OPENAI_API_KEY in os.environ).
- Enable debug logging for nemoguardrails to trace rule matching.
Key Takeaways
- Use nemoguardrails to define and enforce safety rules on AI outputs.
- Guardrails can block or modify unsafe content based on customizable rules and flows.
- Guardrails integrate with any supported LLM provider via the rails configuration.
- Use async Python to run guardrails smoothly in your applications.