Mirror of https://github.com/ai-forever/MERA

WARNING! This is the deprecated version. The new MERA datasets are here and the codebase is here. The old leaderboard is frozen and may be found here; it no longer accepts submissions. All new submissions are made following the instructions on the new leaderboard.

MERA

MERA (Multimodal Evaluation for Russian-language Architectures) is an open benchmark for evaluating foundation models for the Russian language.

About MERA

The MERA benchmark brings together industry and academic players in one place to study the capabilities of foundation models, draw attention to AI problems, develop collaboration within the Russian Federation and in the international arena, and create an independent, unified system for measuring all current models. This repository is a customized version of the original Language Model Evaluation Harness (LM-Harness v0.3.0).

Our contributions to this project are:

  • Instruction-based tasks available on the 🤗 HuggingFace dataset card.
  • A customized version of the LM-Harness (v0.3.0) evaluation code for models.
  • The benchmark website with the Leaderboard and the scoring submission system.
  • Baselines of the open models and the Human Benchmark.
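The instruction-based format can be illustrated with a short sketch. The field names below (`instruction`, `inputs`, `outputs`) and the toy record content are assumptions for illustration only; consult the HuggingFace dataset card for the actual schema of each task:

```python
# Sketch: turn an instruction-based record into a model prompt.
# NOTE: the field names ("instruction", "inputs", "outputs") are
# assumptions; check the dataset card for the real schema.

def build_prompt(record: dict) -> str:
    """Fill the instruction template with the record's inputs."""
    return record["instruction"].format(**record["inputs"])

# A toy PARus-like record (hypothetical content):
record = {
    "instruction": (
        "Premise: {premise}\n"
        "Choose the more plausible effect:\n"
        "1) {choice1}\n2) {choice2}\nAnswer:"
    ),
    "inputs": {
        "premise": "The ground is wet.",
        "choice1": "It rained.",
        "choice2": "The sun was shining.",
    },
    "outputs": "1",
}

print(build_prompt(record))
```

The same pattern applies to every task: the template carries the task description, and the per-example fields are substituted into it before the model is queried.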

The MERA benchmark includes 21 text tasks (17 base tasks + 4 diagnostic tasks). See the table below for the complete list.

| Name | Task Name | Task Type | Test Size | N-shots | Metrics |
|---|---|---|---|---|---|
| MathLogicQA | mathlogicqa | Math, Logic | 1143 | 5 | Acc |
| MultiQ | multiq | Reasoning | 900 | 0 | EM / F1 |
| PARus | parus | Common Sense | 500 | 0 | Acc |
| RCB | rcb | NLI | 438 | 0 | Acc / F1_macro |
| ruModAr | rumodar | Math, Logic | 6000 | 0 | Acc |
| ruMultiAr | rumultiar | Math | 1024 | 5 | Acc |
| ruOpenBookQA | ruopenbookqa | World Knowledge | 400 | 5 | Acc / F1_macro |
| ruTiE | rutie | Reasoning, Dialogue Context, Memory | 430 | 0 | Acc |
| ruWorldTree | ruworldtree | World Knowledge | 525 | 5 | Acc / F1_macro |
| RWSD | rwsd | Reasoning | 260 | 0 | Acc |
| SimpleAr | simplear | Math | 1000 | 5 | Acc |
| BPS | bps | Code, Math | 1000 | 2 | Acc |
| CheGeKa | chegeka | World Knowledge | 416 | 4 | EM / F1 |
| LCS | lcs | Code, Math | 500 | 2 | Acc |
| ruHumanEval | ruhumaneval | Code | 164 | 0 | Pass@k |
| ruMMLU | rummlu | Reasoning | 961 | 5 | Acc |
| USE | use | Exam | 900 | 0 | Grade_norm |
| ruDetox | rudetox | Ethics | 800 | 0 | J(STA, SIM, FL) |
| ruEthics | ruethics | Ethics | 1935 | 0 | 5 MCC |
| ruHateSpeech | ruhatespeech | Ethics | 265 | 0 | Acc |
| ruHHH | ruhhh | Ethics | 178 | 0 | Acc |
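The metrics in the last column are standard. As a reference for how the most common ones are computed, here are minimal plain-Python implementations of accuracy, macro-averaged F1, and exact match (the helper names are my own, not taken from the codebase):

```python
def accuracy(preds, golds):
    """Fraction of predictions that exactly match the gold labels."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def f1_macro(preds, golds):
    """Unweighted mean of per-class F1 scores."""
    labels = set(golds) | set(preds)
    f1s = []
    for label in labels:
        tp = sum(p == g == label for p, g in zip(preds, golds))
        fp = sum(p == label != g for p, g in zip(preds, golds))
        fn = sum(g == label != p for p, g in zip(preds, golds))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def exact_match(pred: str, gold: str) -> float:
    """EM: 1.0 iff the normalized strings coincide."""
    return float(pred.strip().lower() == gold.strip().lower())

preds = ["1", "2", "1", "1"]
golds = ["1", "2", "2", "1"]
print(accuracy(preds, golds))  # 0.75
```

Generation-based metrics such as Pass@k (ruHumanEval) and J(STA, SIM, FL) (ruDetox) are task-specific and implemented in the scoring modules of this repository.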

Our aim is to evaluate all the models:

  • in the same scenarios;
  • using the same metrics;
  • with the same adaptation strategy (e.g., prompting);

and thereby provide an opportunity to make controlled and clear comparisons.

MERA is a collaborative project created by a union of industry and academia, with the support of the companies that create foundation models, to ensure fair and transparent leaderboards for model evaluation.

We express our gratitude to our team and partners:

SberDevices, Sber AI, Yandex, Skoltech AI, MTS AI, NRU HSE, Russian Academy of Sciences, etc.

Powered by Alliance AI

Contents

The repository has the following structure:

  • examples — examples of loading and using the data.
  • humanbenchmarks — materials and code for the human evaluation.
  • modules — examples of the scoring scripts that are used on the website to score your submission.
  • lm-evaluation-harness — a framework for few-shot evaluation of language models.

The submission process is the following:

  • view the datasets via the HuggingFace preview or run the prepared instruction;
  • clone the MERA benchmark repository;
  • produce the submission files using the shell script and the provided customized lm-harness code (the actual model is not required for submission and evaluation);
  • run your model on all the datasets using the lm-harness code: the result is a ZIP archive for submission;
  • register on the website;
  • upload the submission files (ZIP) via the platform interface for automatic assessment.
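The final packaging step can be sketched as follows. The directory layout and file names inside the archive are hypothetical, shown only to illustrate the idea; the exact expected format is defined by the provided sample submission:

```python
# Sketch: bundle per-task result files into a ZIP archive for upload.
# NOTE: the layout and file names are hypothetical; check the sample
# submission for the exact expected format.
import json
import zipfile
from pathlib import Path

def pack_submission(results_dir: Path, archive_path: Path) -> list:
    """Zip every per-task JSON result file found in results_dir."""
    names = []
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for json_file in sorted(results_dir.glob("*.json")):
            zf.write(json_file, arcname=json_file.name)
            names.append(json_file.name)
    return names

# Toy usage with two fake per-task outputs:
results = Path("results")
results.mkdir(exist_ok=True)
(results / "parus.json").write_text(json.dumps({"answers": {"0": "1"}}))
(results / "rcb.json").write_text(json.dumps({"answers": {"0": "entailment"}}))
print(pack_submission(results, Path("submission.zip")))
```

Keeping one JSON file per task makes it easy for the server-side scorer to match each file against the corresponding gold answers.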

Note that the evaluation result is displayed in the user's account and is kept private. Those who want to make their submission results public can use the "Publish" function. Once validation of the submission is approved, the model's overall score is shown publicly. The generation parameters, prompts, and few-shot/zero-shot settings are fixed. You can vary them for your own purposes, but if you want to submit your results to the public leaderboard, check that these parameters are the same and please attach the logs. We have to be sure that the evaluation scenarios for the models are the same and reproducible.

We provide the sample submission for you to check the format.

The whole MERA evaluation process is described in the figure below:

[Figure: evaluation setup]


📌 This is the first, text-only version of the benchmark. We plan to expand and develop it in the future with new tasks and multimodality.

Feel free to ask any questions about our work by email: mera@a-ai.ru. If you have ideas or new tasks, feel free to suggest them — it's important! If you find any bugs or know how to make the code better, please suggest fixes via pull requests and issues in the official GitHub 🤗. We will be glad to receive feedback in any form.

Cite as

@inproceedings{fenogenova-etal-2024-mera,
    title = "{MERA}: A Comprehensive {LLM} Evaluation in {R}ussian",
    author = "Fenogenova, Alena and Chervyakov, Artem and Martynov, Nikita and Kozlova, Anastasia and Tikhonova, Maria and Akhmetgareeva, Albina and Emelyanov, Anton and Shevelev, Denis and Lebedev, Pavel and Sinev, Leonid and Isaeva, Ulyana and Kolomeytseva, Katerina and Moskovskiy, Daniil and Goncharova, Elizaveta and Savushkin, Nikita and Mikhailova, Polina and Minaeva, Anastasia and Dimitrov, Denis and Panchenko, Alexander and Markov, Sergey",
    editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.534",
    doi = "10.18653/v1/2024.acl-long.534",
    pages = "9920--9948",
}
