ModuleFormer

ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. Different experts are sparsely activated, conditioned on the input token, during both training and inference. In our experiments, we found that the sparse architecture gives large pre-trained language models three important abilities:

  1. Efficiency: since ModuleFormer only activates a subset of its experts for each input token, it can match the performance of dense LLMs with more than twice the throughput;
  2. Extendability: ModuleFormer is more robust to catastrophic forgetting than dense LLMs and can easily be extended with new experts to learn knowledge that is not included in the training data;
  3. Specialization: finetuning ModuleFormer can specialize a subset of experts to the finetuning task, and the task-unrelated experts can easily be pruned for lightweight deployment.

MoLM is a collection of ModuleFormer-based language models ranging in scale from 4 billion to 8 billion parameters.

Model Usage To load the models, you need to install this package:

```bash
pip install -e .
```

Then you can load the model with the following code:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig, AutoModelForSequenceClassification
from moduleformer import ModuleFormerForCausalLM, ModuleFormerConfig, ModuleFormerForSequenceClassification

# Register the ModuleFormer classes so the Auto* factories can resolve the
# "moduleformer" model type stored in the checkpoint config.
AutoConfig.register("moduleformer", ModuleFormerConfig)
AutoModelForCausalLM.register(ModuleFormerConfig, ModuleFormerForCausalLM)
AutoModelForSequenceClassification.register(ModuleFormerConfig, ModuleFormerForSequenceClassification)

tokenizer = AutoTokenizer.from_pretrained('ibm/MoLM-350M-4B')
model = AutoModelForCausalLM.from_pretrained('ibm/MoLM-350M-4B')
```
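
Once the classes are registered, the model works with the standard Transformers generation API. A minimal usage sketch, continuing from the `tokenizer` and `model` loaded above (the prompt and sampling settings are arbitrary placeholders):

```python
# Encode a prompt and sample a short continuation with the loaded model.
inputs = tokenizer("Mixture-of-experts language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```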

Model Details MoLM-350M-4B is a MoE-based language model. It has 4 billion parameters, but each input token only activates 350M parameters. Thus, it's computationally equivalent to a 350M dense model. MoLM-700M-4B has 4 billion parameters and is computationally equivalent to a 700M dense model. MoLM-700M-8B has 8 billion parameters and is computationally equivalent to a 700M dense model. All models are trained on 300 billion tokens from publicly available sources, with a learning rate of 3.0 x 10^-4 and a global batch size of 3M tokens.

Model Developers IBM

Variations MoLM comes in two different parameter sizes: 4B and 8B. The 4B models come in two variants with different computation costs: 350M and 700M active parameters per token.

Input Models input text only.

Output Models generate text only.

Model Architecture MoLM is an auto-regressive language model that uses the ModuleFormer architecture. It has 16 attention modules in each attention layer and 32 MLP modules in each MLP layer. During inference, in each layer, MoLM-350M-4B and MoLM-700M-8B activate 2 modules for each token, while MoLM-700M-4B activates 4 modules. MoLM-350M-4B and MoLM-700M-4B have 24 blocks, and MoLM-700M-8B has 48 blocks.
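
As a back-of-the-envelope illustration (not from the paper), the module counts above imply the per-layer activation fractions sketched below. The sketch assumes that "activates k modules" means k of the 16 attention modules and k of the 32 MLP modules in each layer, and it ignores shared parameters such as embeddings:

```python
# Rough sketch of the per-layer sparsity implied by the module counts quoted above.
# Assumption: "activates k modules" means k of 16 attention modules and k of 32
# MLP modules per layer; shared parameters (e.g. embeddings) are ignored.
configs = {
    "MoLM-350M-4B": {"k": 2, "blocks": 24},
    "MoLM-700M-4B": {"k": 4, "blocks": 24},
    "MoLM-700M-8B": {"k": 2, "blocks": 48},
}

for name, cfg in configs.items():
    attn_frac = cfg["k"] / 16  # fraction of attention modules active per token
    mlp_frac = cfg["k"] / 32   # fraction of MLP modules active per token
    print(f"{name}: {cfg['blocks']} blocks, "
          f"{attn_frac:.1%} attention and {mlp_frac:.1%} MLP modules active per token")
```

Under these assumptions, MoLM-700M-4B roughly doubles the active compute of MoLM-350M-4B by activating twice as many modules per layer, while MoLM-700M-8B keeps the per-token compute of MoLM-350M-4B but doubles the depth.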

Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

Research Paper "ModuleFormer: Modularity Emerges from Mixture-of-Experts"

Training Data

MoLM models are pretrained on 300 billion tokens of data from publicly available sources.

Evaluation Results

In this section, we report the results for the MoLM models on standard academic benchmarks. For all evaluations, we use the LM Evaluation Harness.
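
As a rough, illustrative sketch (not part of this repo), an evaluation along these lines could be launched through the harness's Python API; the exact entry point, task names, and arguments vary across harness versions, so treat the snippet below as an assumption rather than a verified command:

```python
# Illustrative only: assumes a recent lm-evaluation-harness release exposing
# lm_eval.simple_evaluate, and that the ModuleFormer classes were registered
# with the Auto* factories as shown in "Model Usage" above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ibm/MoLM-350M-4B",
    tasks=["hellaswag", "piqa", "arc_easy", "arc_challenge", "openbookqa"],
    batch_size=8,
)
print(results["results"])
```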

| Model | Latency (ms) | Memory (GB) | Throughput (tokens/sec) | Hellaswag (acc) | PIQA (acc) | ARC-e (acc) | ARC-c (acc) | OBQA (acc) |
|---|---|---|---|---|---|---|---|---|
| Pythia 410M | 554 | 25 | 59594 | 33.72 | 66.70 | 51.89 | 21.42 | 18.2 |
| GPT-Neo 1.3B | 991 | 23 | 32857 | 38.66 | 71.11 | 56.19 | 23.12 | 21.4 |
| Pythia 1.4B | 918 | 42 | 35559 | 40.41 | 70.84 | 60.52 | 26.11 | 22.2 |
| MoLM-350M-4B | 497 | 27 | 71017 | 39.21 | 70.13 | 56.44 | 23.55 | 20.8 |
| GPT-Neo 2.7B | 1737 | 35 | 18788 | 42.71 | 72.2 | 61.07 | 27.47 | 23.2 |
| Pythia 2.8B | 2111 | 70 | 15522 | 45.34 | 73.99 | 64.35 | 29.35 | 23.8 |
| MoLM-700M-4B | 863 | 27 | 39931 | 42.20 | 73.01 | 60.82 | 25.94 | 22.6 |
| MoLM-700M-8B | 939 | 38 | 37419 | 43.33 | 72.91 | 62.46 | 27.90 | 23.8 |
| Model | TriviaQA 0-shot | TriviaQA 1-shot | TriviaQA 5-shot | HumanEval pass@1 | HumanEval pass@10 | HumanEval pass@100 | Wikitext PPL |
|---|---|---|---|---|---|---|---|
| Pythia 410M | 2.32 | 5.02 | 6.42 | 1.20 | 3.85 | 9.98 | 20.09 |
| GPT-Neo 1.3B | 5.24 | 8.01 | 9.74 | 3.62 | 6.87 | 14.50 | 16.16 |
| Pythia 1.4B | 5.30 | 9.87 | 12.84 | 2.19 | 7.31 | 14.33 | 14.71 |
| MoLM-350M-4B | 5.40 | 11.12 | 13.70 | 3.04 | 6.99 | 13.79 | 15.15 |
| GPT-Neo 2.7B | 4.82 | 11.23 | 13.67 | 4.89 | 9.54 | 17.90 | 13.93 |
| Pythia 2.8B | 7.38 | 15.58 | 18.98 | 4.91 | 11.76 | 21.54 | 12.68 |
| MoLM-700M-4B | 9.07 | 14.24 | 16.49 | 5.50 | 10.65 | 20.27 | 13.20 |
| MoLM-700M-8B | 11.47 | 16.73 | 20.75 | 5.51 | 12.58 | 20.40 | 12.97 |

Ethical Considerations and Limitations

MoLM is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, MoLM’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of MoLM, developers should perform safety testing and tuning tailored to their specific applications of the model.

Citation

Please cite the following paper if you use the data or code in this repo.

@article{shen2023moduleformer,
  title={ModuleFormer: Learning Modular Large Language Models From Uncurated Data},
  author={Shen, Yikang and Zhang, Zheyu and Cao, Tianyou and Tan, Shawn and Chen, Zhenfang and Gan, Chuang},
  journal={arXiv preprint arXiv:2306.04640},
  year={2023}
}

MoLM Model Index

| Model | MoLM |
|---|---|
| 350M-4B | Link |
| 700M-4B | Link |
| 700M-8B | Link |
