How to · Intermediate · 3 min read

Azure OpenAI responsible AI controls

Quick answer
Azure OpenAI provides built-in responsible AI controls, including content filtering, usage monitoring, and policy enforcement via Azure Policy and Microsoft Purview. Use Azure OpenAI Studio and the Azure portal to configure content moderation, audit logs, and compliance settings for ethical and secure AI deployments.

PREREQUISITES

  • Python 3.8+
  • Azure subscription with Azure OpenAI resource
  • Azure CLI installed and logged in
  • pip install azure-identity openai

Set up the Azure OpenAI environment

Start by creating an Azure OpenAI resource (in the Azure portal or via the CLI, as below) and installing the necessary Python SDKs; Azure OpenAI is accessed through the official openai package. Set environment variables for authentication using Microsoft Entra ID (formerly Azure Active Directory) credentials or API keys.

bash
az login
az group create --name myResourceGroup --location eastus
az cognitiveservices account create --name myOpenAIResource --resource-group myResourceGroup \
    --kind OpenAI --sku S0 --location eastus --custom-domain myOpenAIResource

pip install azure-identity openai

export AZURE_OPENAI_ENDPOINT="https://myOpenAIResource.openai.azure.com/"
export AZURE_OPENAI_API_KEY="<your_api_key>"
output
{
  ...
  "provisioningState": "Succeeded",
  ...
}
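
Audit logging, discussed below, can be wired up as soon as the resource exists by streaming its diagnostic logs to a Log Analytics workspace. A sketch, assuming a workspace already exists — the setting name, workspace ID placeholder, and log categories are illustrative; list the valid categories for your resource with `az monitor diagnostic-settings categories list`:

```shell
# Look up the resource ID of the Azure OpenAI account
RESOURCE_ID=$(az cognitiveservices account show \
    --name myOpenAIResource --resource-group myResourceGroup \
    --query id --output tsv)

# Stream audit and request logs to an existing Log Analytics workspace
# (replace the --workspace value with your workspace's resource ID)
az monitor diagnostic-settings create \
    --name openai-audit \
    --resource "$RESOURCE_ID" \
    --workspace "<log_analytics_workspace_id>" \
    --logs '[{"category":"Audit","enabled":true},{"category":"RequestResponse","enabled":true}]'
```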

Step-by-step responsible AI controls

Content filtering is applied to every Azure OpenAI request by default; configure filter severity in Azure OpenAI Studio and inspect the per-category results returned with each response via the SDK. Configure Azure Policy to enforce responsible AI practices and enable Microsoft Purview for data governance and audit logging.

python
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Microsoft Entra ID token authentication
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_ad_token_provider=token_provider,
    api_version="2024-06-01",
)

# Azure applies content filtering to every request; the per-category
# results come back alongside each completion.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # your deployment name
    messages=[{"role": "user", "content": "Text to check for policy violations"}],
)
print("Content filter results:", response.choices[0].content_filter_results)

# Usage monitoring and policy enforcement are configured in Azure portal and Azure Policy
# Audit logs available via Microsoft Purview and Azure Monitor
output
Content filter results: {'hate': {'filtered': False, 'severity': 'safe'}, ...}
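
Azure's content filter annotations report each category (hate, sexual, violence, self-harm) with a severity level, and a small helper can turn that payload into an allow/block decision. A minimal sketch, assuming the documented safe/low/medium/high scale — `is_blocked` and the sample payload are illustrative, not SDK objects:

```python
# Severity scale used by Azure's content filter annotations
SEVERITY_RANK = {"safe": 0, "low": 1, "medium": 2, "high": 3}

def is_blocked(filter_results: dict, threshold: str = "medium") -> bool:
    """Return True if any category was filtered or meets the severity threshold."""
    limit = SEVERITY_RANK[threshold]
    for result in filter_results.values():
        if result.get("filtered"):  # the service already filtered this content
            return True
        if SEVERITY_RANK.get(result.get("severity", "safe"), 0) >= limit:
            return True
    return False

# Sample payload in the shape of the filter annotations shown above
sample = {
    "hate": {"filtered": False, "severity": "safe"},
    "violence": {"filtered": False, "severity": "medium"},
}
print(is_blocked(sample))  # prints: True ("violence" reached "medium")
```

Tightening or loosening the `threshold` lets application code be stricter than the service-side filter without another API call.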

Common variations

  • Use API key authentication instead of DefaultAzureCredential for scripts.
  • Integrate with Azure Monitor for real-time usage alerts.
  • Apply custom content filters via Azure OpenAI Studio settings.
  • Use asynchronous SDK calls for scalable applications.
python
import asyncio
import os

from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider
from openai import AsyncAzureOpenAI

async def moderate_text_async(text: str):
    async with DefaultAzureCredential() as credential:
        client = AsyncAzureOpenAI(
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            azure_ad_token_provider=get_bearer_token_provider(
                credential, "https://cognitiveservices.azure.com/.default"),
            api_version="2024-06-01",
        )
        response = await client.chat.completions.create(
            model="gpt-4o-mini",  # your deployment name
            messages=[{"role": "user", "content": text}],
        )
        print("Async content filter results:",
              response.choices[0].content_filter_results)

asyncio.run(moderate_text_async("Check this text asynchronously."))
output
Async content filter results: {'hate': {'filtered': False, 'severity': 'safe'}, ...}
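
For batch workloads, the async client is usually combined with bounded concurrency so a job stays inside its quota. A minimal sketch of the pattern with a stubbed `check` coroutine standing in for the real client call — the names and the limit of 4 are illustrative:

```python
import asyncio

async def check(text: str) -> bool:
    """Stand-in for a real async moderation/completion call."""
    await asyncio.sleep(0.01)
    return False  # not flagged

async def moderate_many(texts, limit=4):
    # Semaphore caps the number of in-flight requests
    sem = asyncio.Semaphore(limit)

    async def one(text):
        async with sem:
            return text, await check(text)

    return await asyncio.gather(*(one(t) for t in texts))

results = asyncio.run(moderate_many([f"text {i}" for i in range(8)]))
print(len(results))  # prints: 8
```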

Troubleshooting common issues

  • If you receive authentication errors, verify your Azure credentials and environment variables.
  • Content moderation may flag false positives; adjust filters in Azure OpenAI Studio accordingly.
  • Audit logs not appearing? Ensure Microsoft Purview and Azure Monitor are properly configured.
  • API rate limits can cause failures; implement retry logic and monitor usage quotas.
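
The rate-limit advice above can be sketched as exponential backoff with jitter. `RateLimitError` here is a local placeholder for the SDK's 429 exception (e.g. openai.RateLimitError), and `flaky_call` just simulates two throttled attempts:

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for the SDK's 429 error."""

def with_backoff(call, max_retries=5, base=1.0):
    """Retry `call` on rate-limit errors with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Sleep base * 2^attempt seconds, plus jitter to spread out retries
            time.sleep(base * 2 ** attempt + random.uniform(0, base))

# Demo: raise 429 twice, then succeed on the third attempt
attempts = {"count": 0}

def flaky_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

print(with_backoff(flaky_call, base=0.01))  # prints: ok
```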

Key Takeaways

  • Use Azure OpenAI Studio to configure content moderation and responsible AI policies.
  • Leverage Azure Policy and Microsoft Purview for compliance and audit logging.
  • Authenticate securely with Azure Active Directory or API keys for SDK access.
  • Monitor usage and handle rate limits proactively to maintain service reliability.
Verified 2026-04 · gpt-4o, gpt-4o-mini