How-to · Beginner · 4 min read

How to use Azure OpenAI with LangChain

Quick answer
Use the AzureChatOpenAI class from the langchain_openai package to connect LangChain to Azure OpenAI. Set environment variables for AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, and your deployment name, then construct AzureChatOpenAI with the deployment name and API version and call invoke() for chat completions.

PREREQUISITES

  • Python 3.8+
  • Azure OpenAI API key
  • pip install "openai>=1.0"
  • pip install "langchain-openai>=0.2"

Setup

Install the required Python packages and set environment variables for Azure OpenAI credentials and deployment name.

  • Install packages: openai and langchain-openai.
  • Set environment variables: AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, and AZURE_OPENAI_DEPLOYMENT.
bash
pip install "openai>=1.0" "langchain-openai>=0.2"
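Then export your credentials and deployment name in the shell. The values below are placeholders; substitute your own resource's key, endpoint, and deployment name:

```bash
# Placeholder values -- replace with your Azure OpenAI resource details
export AZURE_OPENAI_API_KEY="<your-api-key>"
export AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"
export AZURE_OPENAI_DEPLOYMENT="<your-deployment-name>"
```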

Step by step

This example creates an AzureChatOpenAI model bound to your deployment, then invokes it to generate a chat completion. AzureChatOpenAI reads AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT from the environment automatically, so only the deployment name and API version need to be passed explicitly.

python
import os
from langchain_openai import AzureChatOpenAI

# Initialize the Azure-backed chat model; the API key and endpoint are
# read from AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT automatically.
chat = AzureChatOpenAI(
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"],
    api_version="2024-02-01",
)

# Define prompt
prompt = "Explain the benefits of using Azure OpenAI with LangChain."

# Invoke chat completion (OpenAI-style message dicts are accepted)
response = chat.invoke([{"role": "user", "content": prompt}])

print(response.content)
output
Azure OpenAI integration with LangChain enables seamless access to powerful language models hosted on Azure, allowing you to build scalable and secure AI applications with ease.

Common variations

For async calls, use the same AzureChatOpenAI instance and await its ainvoke() method; there is no separate async class. For streaming responses, iterate over chat.stream() (or astream() in async code). You can also switch between Azure OpenAI deployments by changing the azure_deployment parameter.

python
import asyncio
import os
from langchain_openai import AzureChatOpenAI

async def async_example():
    chat = AzureChatOpenAI(
        azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        api_version="2024-02-01",
    )
    # ainvoke() is the async counterpart of invoke()
    response = await chat.ainvoke([{"role": "user", "content": "What is LangChain?"}])
    print(response.content)

asyncio.run(async_example())
output
LangChain is a framework for building applications with large language models, enabling chaining of prompts, agents, and memory.

Troubleshooting

  • If you get authentication errors, verify your AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT environment variables are set correctly.
  • Ensure the azure_deployment value exactly matches the deployment name configured in your Azure OpenAI resource; it is the deployment name, not the underlying model name.
  • If you see "model not found" errors, confirm the deployment exists, that it backs the model you expect, and that the api_version you pass is supported by your resource.
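Before debugging authentication, it can help to confirm the required environment variables are actually set. The helper below is a small sketch using the variable names from this guide's setup:

```python
import os

REQUIRED_VARS = ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_DEPLOYMENT"]

def missing_azure_env(env=None):
    """Return the required Azure OpenAI variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

For example, `missing_azure_env({"AZURE_OPENAI_API_KEY": "k"})` returns the two names still missing, so you can print a targeted error before constructing the client.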

Key Takeaways

  • Use AzureChatOpenAI from langchain_openai to connect LangChain to Azure OpenAI.
  • Set AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, and the deployment name as environment variables.
  • Pass azure_deployment and api_version to AzureChatOpenAI, then call invoke() for chat completions.
  • Async and streaming calls are supported via ainvoke(), stream(), and astream().
  • Check deployment names and API versions carefully to avoid errors.
Verified 2026-04 · gpt-4o, gpt-4o-mini