How-to · Beginner · 3 min read

How to use the Azure OpenAI Assistants API

Quick answer
Use the AzureOpenAI client from the openai Python package, configured with your Azure endpoint and deployment name. Create chat completions by calling client.chat.completions.create() with your assistant's deployment name and a list of messages.

PREREQUISITES

  • Python 3.8+
  • Azure OpenAI resource with deployed assistant
  • Azure OpenAI API key
  • pip install "openai>=1.0"

Setup

Install the official openai Python package version 1.0 or higher. Set environment variables for your Azure OpenAI API key, endpoint, and deployment name. The deployment name corresponds to your assistant model deployment in Azure.

bash
pip install "openai>=1.0"
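
The three environment variables described above can be exported in a POSIX shell before running any of the examples (the values shown are placeholders for your own resource):

```shell
# Values come from your Azure OpenAI resource in the Azure portal
# (Keys and Endpoint page, and your deployment name).
export AZURE_OPENAI_API_KEY="<your-api-key>"
export AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"
export AZURE_OPENAI_DEPLOYMENT="<your-deployment-name>"
```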

Step by step

Use the AzureOpenAI client to connect to your Azure OpenAI resource. Provide your API key and Azure endpoint URL, and specify an API version. Then call chat.completions.create() with your assistant deployment name and chat messages.

python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-02-01"
)

response = client.chat.completions.create(
    model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
    messages=[{"role": "user", "content": "Hello, how can you assist me today?"}]
)

print(response.choices[0].message.content)
output
Hello! I can help you with a variety of tasks such as answering questions, generating text, or providing recommendations. What would you like to do?

Common variations

You can target a different assistant deployment by changing the model parameter to another deployment name. For streaming responses, pass stream=True to chat.completions.create(). Async usage is supported through the AsyncAzureOpenAI client with asyncio and await syntax.

python
import asyncio
import os

from openai import AsyncAzureOpenAI

async def async_chat():
    # Async calls require the AsyncAzureOpenAI client, not AzureOpenAI.
    client = AsyncAzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_version="2024-02-01"
    )

    # With stream=True, the awaited call returns an async iterator of chunks.
    response = await client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[{"role": "user", "content": "Give me a quick summary of Azure OpenAI Assistants."}],
        stream=True
    )

    async for chunk in response:
        # Deltas are objects, not dicts; content can be None on some chunks.
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)

asyncio.run(async_chat())
output
Azure OpenAI Assistants provide conversational AI capabilities integrated with Azure services, enabling developers to build intelligent chatbots and virtual assistants.

Troubleshooting

  • If you get authentication errors, verify your AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT environment variables are set correctly.
  • If you see deployment not found errors, confirm your deployment name matches AZURE_OPENAI_DEPLOYMENT and that the deployment exists in your Azure OpenAI resource.
  • For network issues, ensure your environment can reach the Azure endpoint URL.

Key Takeaways

  • Use the AzureOpenAI client with your Azure endpoint and deployment name to call the Assistants API.
  • Set environment variables for API key, endpoint, and deployment to keep credentials secure.
  • Support for streaming and async calls enables responsive and scalable assistant interactions.
Verified 2026-04 · gpt-4o, AzureOpenAI