Concept intermediate · 3 min read

What is the MTEB benchmark for embeddings?

Quick answer
The Massive Text Embedding Benchmark (MTEB) is a standardized benchmark for evaluating text embedding models across a wide range of natural language tasks. It provides a unified framework for comparing embeddings on tasks like retrieval, clustering, classification, and semantic search using dozens of diverse datasets, measuring how well a model generalizes rather than how it performs on a single use case.

How it works

MTEB works by aggregating a diverse set of natural language processing tasks that test embedding models on various capabilities such as semantic search, clustering, classification, and retrieval. It acts like a decathlon for embeddings, measuring their performance across multiple datasets and task types to provide a holistic evaluation. This helps identify embeddings that generalize well beyond a single use case.
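The "decathlon" idea above can be sketched in plain Python: score a model on each dataset, group the datasets by task type, then average. The task names and scores below are illustrative placeholders, not real MTEB results:

```python
from statistics import mean

# Hypothetical per-dataset scores for one embedding model,
# grouped by MTEB task type (values are illustrative, not real results)
scores = {
    "Retrieval": [0.41, 0.38, 0.45],
    "Clustering": [0.39, 0.42],
    "Classification": [0.71, 0.68, 0.74],
    "STS": [0.80, 0.78],
}

# Per-task-type averages, similar to what a leaderboard reports per category
per_type = {task: round(mean(vals), 3) for task, vals in scores.items()}

# A single headline number: the mean over all datasets
overall = round(mean(v for vals in scores.values() for v in vals), 3)

print(per_type)
print(overall)
```

A model that tops one category but collapses in another drags down its overall mean, which is exactly the generalization signal MTEB is designed to surface.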

Concrete example

Using the mteb Python library, you can evaluate an embedding model on benchmark tasks with a few lines of code. Below is an example that evaluates a Hugging Face sentence-transformers model on a single classification task; the full benchmark takes many hours, so it is common to start with a named subset (exact API details vary slightly between mteb versions):

python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Load a Hugging Face embedding model
model = SentenceTransformer("all-MiniLM-L6-v2")

# Select tasks by name; omit the argument to run the full benchmark
benchmark = MTEB(tasks=["Banking77Classification"])

# Run the evaluation; per-task scores are also written as JSON files
# to the output folder
results = benchmark.run(model, output_folder="mteb_results")
print(results)

When to use it

Use MTEB when you need an objective, comprehensive evaluation of embedding models across multiple NLP tasks to select the best model for your application. It is ideal for benchmarking semantic search, clustering, and classification embeddings. Avoid using it if you only need evaluation on a single, very specific dataset or task, as MTEB focuses on broad generalization.

Key terms

Embedding: A vector representation of text capturing semantic meaning.
Semantic search: Retrieving documents based on meaning rather than keyword matching.
Clustering: Grouping similar items together based on embeddings.
Classification: Assigning categories to text based on embeddings.
Benchmark: A standardized test to compare model performance.
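Semantic search, the first task type defined above, is easy to illustrate with cosine similarity over toy 3-dimensional vectors (real models such as all-MiniLM-L6-v2 produce 384-dimensional embeddings; the documents and numbers here are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy document "embeddings" (illustrative values, not from a real model)
docs = {
    "cat on a mat": [0.9, 0.1, 0.0],
    "kitten on a rug": [0.8, 0.2, 0.1],
    "stock market report": [0.0, 0.1, 0.9],
}

# Toy query embedding for something like "feline on a carpet"
query = [0.8, 0.2, 0.1]

# Semantic search: rank documents by similarity of meaning, not keywords
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # the cat-related document wins despite sharing no keywords
```

MTEB's retrieval tasks score exactly this kind of ranking at scale, using metrics such as nDCG over thousands of queries and documents.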

Key Takeaways

  • MTEB evaluates embedding models across diverse NLP tasks for comprehensive benchmarking.
  • Use MTEB to select embeddings that generalize well beyond single datasets or tasks.
  • The mteb Python library enables easy integration and evaluation of Hugging Face models.
  • MTEB covers semantic search, clustering, classification, and retrieval tasks.
  • Avoid MTEB if you only need evaluation on a narrow or domain-specific dataset.