Mistral API pricing
Quick answer
Mistral API pricing is usage-based: you are charged per token processed, with rates varying by model size and capability (Mistral's published rates are typically quoted per million tokens). You can access mistral-large-latest and mistral-small-latest models via the OpenAI-compatible API, with detailed pricing available on Mistral's official website.
Prerequisites
- Python 3.8+
- MISTRAL_API_KEY environment variable set
- pip install openai>=1.0
Setup
Install the openai Python package to interact with the Mistral API using the OpenAI-compatible SDK. Set your MISTRAL_API_KEY as an environment variable for authentication.
pip install openai>=1.0
Step by step
Use the OpenAI client pointed at Mistral's OpenAI-compatible endpoint to call Mistral models. Below is a complete example that generates a chat completion with mistral-large-latest.
import os
from openai import OpenAI
client = OpenAI(api_key=os.environ["MISTRAL_API_KEY"], base_url="https://api.mistral.ai/v1")
response = client.chat.completions.create(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Explain the benefits of using Mistral API."}]
)
print(response.choices[0].message.content)
Output
Mistral API offers high-performance language models with competitive pricing, enabling developers to integrate advanced AI capabilities efficiently.
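Because billing is per token, you can estimate a request's cost from the usage counts returned with each completion (response.usage.prompt_tokens and response.usage.completion_tokens in the OpenAI SDK). The sketch below shows the arithmetic; the per-million-token rates are illustrative placeholders, not Mistral's actual prices, so check the official pricing page before relying on them.

```python
# Sketch: estimate one request's cost from token counts.
# NOTE: these rates are hypothetical placeholders, NOT Mistral's real prices.
RATES_PER_MILLION = {
    # model: (input USD, output USD) per 1,000,000 tokens -- assumed values
    "mistral-large-latest": (2.00, 6.00),
    "mistral-small-latest": (0.20, 0.60),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    in_rate, out_rate = RATES_PER_MILLION[model]
    return (prompt_tokens * in_rate + completion_tokens * out_rate) / 1_000_000

# With a live response, pass response.usage.prompt_tokens and
# response.usage.completion_tokens instead of these sample counts.
cost = estimate_cost("mistral-small-latest", prompt_tokens=1200, completion_tokens=300)
print(f"${cost:.6f}")  # → $0.000420
```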
Common variations
You can switch models to mistral-small-latest for lower cost or faster responses. The API supports streaming responses and asynchronous calls using standard OpenAI SDK patterns.
import asyncio
import os
from openai import AsyncOpenAI

async def async_chat():
    client = AsyncOpenAI(api_key=os.environ["MISTRAL_API_KEY"], base_url="https://api.mistral.ai/v1")
    response = await client.chat.completions.create(
        model="mistral-small-latest",
        messages=[{"role": "user", "content": "Summarize the Mistral API pricing."}]
    )
    print(response.choices[0].message.content)

asyncio.run(async_chat())
Output
Mistral API pricing is based on token usage, with smaller models costing less per 1,000 tokens processed.
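The streaming support mentioned above follows the standard OpenAI SDK pattern: pass stream=True and concatenate the content deltas as chunks arrive. The helper below sketches the chunk-assembly logic so it can run without a network call; the real API usage (which assumes a valid MISTRAL_API_KEY) is shown in the comments.

```python
def collect_stream(chunks) -> str:
    """Concatenate the content deltas from a stream of chat-completion chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks (e.g. the final one) carry no content
            parts.append(delta)
    return "".join(parts)

# Real usage (requires the openai package and a valid MISTRAL_API_KEY):
# import os
# from openai import OpenAI
# client = OpenAI(api_key=os.environ["MISTRAL_API_KEY"],
#                 base_url="https://api.mistral.ai/v1")
# stream = client.chat.completions.create(
#     model="mistral-small-latest",
#     messages=[{"role": "user", "content": "Summarize the Mistral API pricing."}],
#     stream=True,
# )
# print(collect_stream(stream))
```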
Troubleshooting
- If you receive authentication errors, verify your MISTRAL_API_KEY environment variable is set correctly.
- For rate limit errors, check your usage and consider upgrading your plan or reducing request frequency.
- Ensure you use the correct base_url (https://api.mistral.ai/v1) for API calls.
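One common way to handle the rate limit errors above is to retry with exponential backoff. The sketch below wraps any callable; in the OpenAI SDK, openai.RateLimitError is the exception raised on HTTP 429, as shown in the commented usage.

```python
import time

def with_backoff(fn, retries=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), retrying on the given exceptions with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except retry_on:
            if attempt == retries - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Usage with the OpenAI SDK (requires the openai package):
# import openai
# response = with_backoff(
#     lambda: client.chat.completions.create(
#         model="mistral-small-latest",
#         messages=[{"role": "user", "content": "Hello"}],
#     ),
#     retry_on=(openai.RateLimitError,),
# )
```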
Key Takeaways
- Use the OpenAI SDK with base_url="https://api.mistral.ai/v1" to access Mistral models through their OpenAI-compatible endpoint.
- Mistral API pricing is usage-based and charged per token, with rates varying by model size.
- Set your MISTRAL_API_KEY in environment variables to authenticate requests securely.