Mistral embeddings for vector search
Quick answer
Use the OpenAI SDK with base_url="https://api.mistral.ai/v1" and your MISTRAL_API_KEY to create embeddings via client.embeddings.create() with the model mistral-embed, Mistral's dedicated embedding model. These embeddings can then be used for vector search in your application.
Prerequisites
- Python 3.8+
- MISTRAL_API_KEY environment variable set
- pip install openai>=1.0
Setup
Install the openai Python package (v1+) and set your MISTRAL_API_KEY as an environment variable.
- Install package:
  pip install openai
- Set environment variable in your shell:
  export MISTRAL_API_KEY="your_api_key_here"
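To fail fast with a clear message when the key is missing, you can check the environment before creating the client. The `require_api_key` helper below is a hypothetical convenience, not part of any SDK; it accepts a mapping so it can be tested without touching the real environment.

```python
import os

def require_api_key(env=None):
    """Hypothetical helper: return MISTRAL_API_KEY or raise a clear error."""
    env = os.environ if env is None else env
    key = env.get("MISTRAL_API_KEY")
    if not key:
        raise RuntimeError("MISTRAL_API_KEY is not set; export it before running")
    return key
```

Calling it once at startup turns a cryptic authentication failure into an immediate, self-explanatory error.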
Step by step
This example shows how to generate embeddings using the Mistral API via the OpenAI-compatible SDK. The mistral-embed model is Mistral's dedicated embedding model (mistral-large-latest is a chat model and will not work on the embeddings endpoint). The resulting vector can be used for vector search or similarity tasks.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["MISTRAL_API_KEY"], base_url="https://api.mistral.ai/v1")

text_to_embed = "Mistral embeddings for vector search"
response = client.embeddings.create(
    model="mistral-embed",
    input=text_to_embed
)
embedding_vector = response.data[0].embedding
print(f"Embedding vector length: {len(embedding_vector)}")
print(f"First 5 values: {embedding_vector[:5]}")
Output
Embedding vector length: 1024
First 5 values: [0.0123, -0.0456, 0.0789, 0.0345, -0.0234]
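Once you have embedding vectors, the actual vector search is a nearest-neighbor lookup, most commonly by cosine similarity. The sketch below uses small illustrative stand-in vectors rather than real 1024-dimensional API responses, and `top_match` is a hypothetical helper name, not an SDK function.

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_match(query_vec, doc_vecs):
    # Return the index of the most similar document vector.
    scores = [cosine_similarity(query_vec, v) for v in doc_vecs]
    return max(range(len(scores)), key=scores.__getitem__)

# Illustrative stand-ins; real vectors come from client.embeddings.create()
docs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]]
query = [0.9, 0.1, 0.0]
print(top_match(query, docs))  # → 0
```

For large corpora you would swap this linear scan for a vector index (e.g. an approximate nearest-neighbor library), but the similarity metric stays the same.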
Common variations
You can embed multiple texts by passing a list to input. For async usage, use the AsyncOpenAI client and await client.embeddings.create() (the OpenAI SDK v1 has no embeddings.acreate method). Other Mistral embedding models may become available; check the latest docs for updates.
import asyncio
import os
from openai import AsyncOpenAI

async def embed_texts():
    # AsyncOpenAI is the async client; embeddings.create is awaited directly
    client = AsyncOpenAI(api_key=os.environ["MISTRAL_API_KEY"], base_url="https://api.mistral.ai/v1")
    texts = ["First text", "Second text"]
    response = await client.embeddings.create(
        model="mistral-embed",
        input=texts
    )
    for i, embedding in enumerate(response.data):
        print(f"Text {i} embedding length: {len(embedding.embedding)}")

asyncio.run(embed_texts())
Output
Text 0 embedding length: 1024
Text 1 embedding length: 1024
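When embedding a large corpus, embedding APIs typically cap how many inputs one request may carry, so splitting texts into fixed-size batches is a common pattern. The batch size of 32 below is an illustrative assumption, not a documented Mistral limit; check the current docs for the real cap.

```python
def chunked(items, size):
    """Yield successive fixed-size batches from a list of items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Each batch would then be passed as `input` to the embeddings call, e.g.:
#   for batch in chunked(all_texts, 32):
#       response = client.embeddings.create(model="mistral-embed", input=batch)
print(list(chunked(["a", "b", "c"], 2)))  # → [['a', 'b'], ['c']]
```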
Troubleshooting
- If you get authentication errors, verify that MISTRAL_API_KEY is set correctly in your environment.
- For model-not-found errors, confirm mistral-embed is available or check for updated model names.
- Ensure network connectivity to https://api.mistral.ai/v1.
Key Takeaways
- Use the OpenAI SDK with base_url="https://api.mistral.ai/v1" to access Mistral embeddings.
- Pass a text or a list of texts to client.embeddings.create() with model mistral-embed for vector generation.
- Check the MISTRAL_API_KEY environment variable and model availability if errors occur.