mteb
Mirror of https://github.com/ai-forever/mteb
# MTEB

Multimodal toolbox for evaluating embeddings and retrieval systems
Installation | Usage | Leaderboard | Documentation | Citing
## Installation

You can install `mteb` with pip. For more on installation, see the documentation.
## Example Usage

Below is a simple use-case example. For more information, see the documentation.
You can also run it using the CLI:
For more on how to use the CLI check out the related documentation.
## Overview
| Overview | |
|---|---|
| 📈 Leaderboard | The interactive leaderboard of the benchmark |
| **Get Started** | |
| 🏃 Get Started | Overview of how to use mteb |
| 🤖 Defining Models | How to use existing models and define custom ones |
| 📋 Selecting Tasks | How to select tasks, benchmarks, splits, etc. |
| 🏭 Running Evaluation | How to run evaluations, including cache management, speeding up evaluations, etc. |
| 📊 Loading Results | How to load and work with existing model results |
| **Overview** | |
| 📋 Tasks | Overview of available tasks |
| 📐 Benchmarks | Overview of available benchmarks |
| 🤖 Models | Overview of available models |
| **Contributing** | |
| 🤖 Adding a Model | How to submit a model to MTEB and to the leaderboard |
| 👩‍💻 Adding a Dataset | How to add a new task/dataset to MTEB |
| 👩‍💻 Adding a Benchmark | How to add a new benchmark to MTEB and to the leaderboard |
| 🤝 Contributing | How to contribute to MTEB and set it up for development |
## Citing

MTEB was introduced in "MTEB: Massive Text Embedding Benchmark" and heavily expanded in "MMTEB: Massive Multilingual Text Embedding Benchmark". When using `mteb`, we recommend that you cite both articles.
Bibtex Citation (click to unfold)
If you use any of the specific benchmarks, we also recommend citing the authors of both the benchmark and its tasks.
