Topic: qlora
Unify Efficient Fine-tuning of 100+ LLMs
llm, gpt, language-model, agent, baichuan, chatglm, fine-tuning, generative-ai, instruction-tuning, large-language-models, llama, lora, mistral, mixture-of-experts, peft, qlora, quantization, qwen, rlhf, transformers · Python
Updated 8 months ago
🐳 Aurora is a [Chinese Version] MoE model. Aurora is a further work based on Mixtral-8x7B, which activates the model's chat capability in the Chinese open domain.
llm, large-language-models, language-model, gpt, fine-tuning, lora, instruction-tuning, qlora, chinese, mixtral, mixtral-8x7b, mixtral-8x7b-instruct · Python
Updated 7 months ago