How-to · Beginner · 3 min read

How to use Mistral with LangChain

Quick answer
Mistral's API is OpenAI-compatible, so you can point LangChain's ChatOpenAI class at it directly: set base_url="https://api.mistral.ai/v1" and pass your Mistral API key. This lets you use Mistral models like mistral-large-latest seamlessly within LangChain chat chains.

PREREQUISITES

  • Python 3.8+
  • MISTRAL_API_KEY environment variable set
  • pip install "openai>=1.0" langchain-openai (quote the version specifier so the shell does not treat >= as a redirection)

Setup

Install the required Python packages and set your Mistral API key as an environment variable.

  • Install packages: pip install "openai>=1.0" langchain-openai
  • Set environment variable: export MISTRAL_API_KEY='your_api_key' (Linux/macOS) or set MISTRAL_API_KEY=your_api_key (Windows)
bash
pip install "openai>=1.0" langchain-openai
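If you want to fail fast when the key is missing, a tiny helper (hypothetical, not part of either library) can check the environment before any request is made:

```python
import os

def require_api_key(name: str = "MISTRAL_API_KEY") -> str:
    """Return the named API key from the environment, or fail with a clear message."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; export it before running the examples.")
    return key
```

Calling this once at startup turns a confusing mid-request authentication error into an immediate, readable one.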

Step by step

This example points LangChain's ChatOpenAI at Mistral's OpenAI-compatible endpoint and generates a chat completion. ChatOpenAI builds the underlying OpenAI client for you, so there is no separate client object to manage.

python
import os
from langchain_openai import ChatOpenAI

# Point ChatOpenAI at Mistral's OpenAI-compatible endpoint;
# it constructs the underlying OpenAI client internally.
chat = ChatOpenAI(
    model="mistral-large-latest",
    api_key=os.environ["MISTRAL_API_KEY"],
    base_url="https://api.mistral.ai/v1",
    temperature=0.7,
)

# Use the LangChain chat interface
response = chat.invoke([{"role": "user", "content": "Hello, how can I use Mistral with LangChain?"}])
print(response.content)
output
Hello! You can use Mistral models with LangChain by pointing LangChain's ChatOpenAI class at Mistral's OpenAI-compatible API base URL and supplying your Mistral API key.

Common variations

ChatOpenAI supports async out of the box: simply await its ainvoke method (there is no separate async class). To stream tokens, set streaming=True and iterate over chat.stream(...). You can also switch to other Mistral models like mistral-small-latest by changing the model parameter.

python
import asyncio
import os

from langchain_openai import ChatOpenAI

async def async_chat():
    # Same configuration as before; ainvoke is built into ChatOpenAI.
    chat = ChatOpenAI(
        model="mistral-small-latest",
        api_key=os.environ["MISTRAL_API_KEY"],
        base_url="https://api.mistral.ai/v1",
        temperature=0.5,
    )
    response = await chat.ainvoke([{"role": "user", "content": "Tell me a joke."}])
    print(response.content)

asyncio.run(async_chat())
output
Why did the scarecrow win an award? Because he was outstanding in his field!

Troubleshooting

  • If you get authentication errors, verify your MISTRAL_API_KEY is set correctly in your environment.
  • For model not found errors, confirm you are using a valid Mistral model name like mistral-large-latest.
  • If requests time out, check your network connection and Mistral API status.
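To separate key problems from network problems, you can hit Mistral's OpenAI-compatible GET /v1/models endpoint directly with the standard library: a 401 response points at the key, a timeout at the network. A minimal sketch (the request is built unconditionally but only sent when MISTRAL_API_KEY is set):

```python
import json
import os
import urllib.request

def build_models_request() -> urllib.request.Request:
    """Build an authenticated GET /v1/models request for a quick sanity check."""
    key = os.environ.get("MISTRAL_API_KEY", "")
    return urllib.request.Request(
        "https://api.mistral.ai/v1/models",
        headers={"Authorization": f"Bearer {key}"},
    )

req = build_models_request()
if os.environ.get("MISTRAL_API_KEY"):  # only send with a real key
    with urllib.request.urlopen(req, timeout=10) as resp:
        models = json.load(resp)
        print([m["id"] for m in models["data"]])  # valid model names to use
```

The printed IDs are also a quick way to confirm a model name before passing it to ChatOpenAI.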

Key Takeaways

  • Mistral exposes an OpenAI-compatible API at base_url="https://api.mistral.ai/v1".
  • Point LangChain's ChatOpenAI at that base URL with your Mistral API key to integrate Mistral seamlessly; no separate client object is needed.
  • Use ChatOpenAI's built-in ainvoke and stream methods for async and streaming calls.
  • Always set your MISTRAL_API_KEY in environment variables to avoid authentication issues.
Verified 2026-04 · mistral-large-latest, mistral-small-latest