MTEB

Multimodal toolbox for evaluating embeddings and retrieval systems


Installation | Usage | Leaderboard | Documentation | Citing

Installation

You can install mteb using pip. For more on installation, see the documentation.

Example Usage

Below we present a simple use-case example. For more information, see the documentation.

You can also run it using the CLI:

For more on how to use the CLI check out the related documentation.

Overview

- 📈 Leaderboard: the interactive leaderboard of the benchmark

Get started

- 🏃 Get started: overview of how to use mteb
- 🤖 Defining models: how to use existing models and define custom ones
- 📋 Selecting tasks: how to select tasks, benchmarks, splits, etc.
- 🏭 Running evaluation: how to run evaluations, including cache management, speeding up evaluations, etc.
- 📊 Loading results: how to load and work with existing model results

Overview

- 📋 Tasks: overview of available tasks
- 📐 Benchmarks: overview of available benchmarks
- 🤖 Models: overview of available models

Contributing

- 🤖 Adding a model: how to submit a model to MTEB and the leaderboard
- 👩‍💻 Adding a dataset: how to add a new task/dataset to MTEB
- 👩‍💻 Adding a benchmark: how to add a new benchmark to MTEB and the leaderboard
- 🤝 Contributing: how to contribute to MTEB and set it up for development

Citing

MTEB was introduced in "MTEB: Massive Text Embedding Benchmark" and heavily expanded in "MMTEB: Massive Multilingual Text Embedding Benchmark". When using `mteb`, we recommend that you cite both articles.

Bibtex Citation

If you use any of the specific benchmarks, we also recommend that you cite the authors of both the benchmark and its tasks: