Unify Efficient Fine-tuning of 100+ LLMs
Topics: llm, gpt, language-model, agent, baichuan, chatglm, fine-tuning, generative-ai, instruction-tuning, large-language-models, llama, lora, mistral, mixture-of-experts, peft, qlora, quantization, qwen, rlhf, transformers
Python
Updated 8 months ago
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Python
Updated 8 months ago
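Several of the repositories above (PEFT, LLaMA-Factory) center on LoRA-style adapters, which keep the base weight frozen and learn a low-rank update. A minimal, dependency-free sketch of the underlying arithmetic (pure Python; matrix sizes and names are illustrative, not taken from any of these libraries):

```python
# LoRA sketch: effective weight W' = W + (alpha / r) * B @ A,
# where A is (r x d_in), B is (d_out x r), and r << min(d_in, d_out).
# The frozen base W is never modified; only A and B would be trained.

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(W, A, B, x, alpha, r):
    """Compute (W + (alpha / r) * B @ A) @ x without materialising W'."""
    base = matvec(W, x)                    # frozen path: W @ x
    low_rank = matvec(B, matvec(A, x))     # adapter path: B @ (A @ x), two cheap matvecs
    scale = alpha / r                      # standard LoRA scaling
    return [b + scale * l for b, l in zip(base, low_rank)]

# Toy frozen base weight (2x3) with rank-1 adapters (illustrative values).
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
A = [[1.0, 1.0, 1.0]]        # r x d_in  = 1 x 3
B = [[0.5], [0.25]]          # d_out x r = 2 x 1
x = [1.0, 2.0, 3.0]

y = lora_forward(W, A, B, x, alpha=2.0, r=1)  # -> [7.0, 5.0]
```

Because the adapter path is two skinny matrix-vector products, the trainable parameter count is r * (d_in + d_out) instead of d_in * d_out, which is why these toolkits can fine-tune very large models on modest hardware.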
🐳 Aurora is a Chinese-language MoE model. It is a follow-up work built on Mixtral-8x7B that activates the model's Chinese open-domain chat capability.
Topics: llm, large-language-models, language-model, gpt, fine-tuning, lora, instruction-tuning, qlora, chinese, mixtral, mixtral-8x7b, mixtral-8x7b-instruct
Python
Updated 7 months ago
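Mixtral-8x7B, which Aurora builds on, is a sparse mixture-of-experts model: a router scores the experts for each token and only the top-2 feed-forward experts are evaluated, with their outputs mixed by renormalised gate weights. A toy, dependency-free sketch of that gating step (scalar "expert outputs" and all values are illustrative):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def top2_moe(router_logits, expert_outputs):
    """Mixtral-style routing: keep the 2 highest-scoring experts,
    renormalise their gate weights, and mix only those outputs."""
    top = sorted(range(len(router_logits)),
                 key=lambda i: router_logits[i], reverse=True)[:2]
    gates = softmax([router_logits[i] for i in top])  # renormalise over the top-2 only
    return sum(g * expert_outputs[i] for g, i in zip(gates, top))

# 4 toy experts; each would normally be a full feed-forward block.
router_logits = [0.1, 2.0, -1.0, 1.5]   # experts 1 and 3 win
expert_out = [10.0, 20.0, 30.0, 40.0]
y = top2_moe(router_logits, expert_out)
```

Because only 2 experts run per token, the compute cost per token stays close to a much smaller dense model even though all experts' parameters exist.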
BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese dialogue LLM)
HTML
Updated 7 months ago
LLM fine-tuning with PEFT
Jupyter Notebook
Updated 7 months ago