Unify Efficient Fine-tuning of 100+ LLMs
llm, gpt, language-model, agent, baichuan, chatglm, fine-tuning, generative-ai, instruction-tuning, large-language-models, llama, lora, mistral, mixture-of-experts, peft, qlora, quantization, qwen, rlhf, transformers - Python
Updated 8 months ago
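This entry's topics (lora, peft, qlora) point at parameter-efficient fine-tuning. Below is a minimal sketch of that technique using the Hugging Face peft and transformers libraries; the base model name and hyperparameters are illustrative assumptions, not this repo's own defaults or scripts.

```python
# Minimal LoRA fine-tuning setup: freeze the base model and train only
# low-rank adapter weights injected into the attention projections.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2-0.5B"  # assumed small base model, chosen for demonstration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

QLoRA adds 4-bit quantization of the frozen base model on top of this same adapter setup, which is how such frameworks fit large models on a single GPU.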
Lightweight demos for fine-tuning instruct LLMs, powered by transformers/accelerate and open-source datasets.
Jupyter Notebook
Updated 7 months ago
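For a sense of the transformers/accelerate workflow such demos describe, here is a minimal sketch: load an open instruction dataset, tokenize it, and fine-tune with Trainer (which runs on accelerate under the hood). The dataset and model names are illustrative assumptions, not the demos' actual choices.

```python
# Tiny instruction fine-tuning loop with datasets + transformers.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # assumed small model so the demo runs on modest hardware
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumed open instruction dataset; any (instruction, output) pairs work.
ds = load_dataset("tatsu-lab/alpaca", split="train[:1000]")

def tokenize(example):
    text = example["instruction"] + "\n" + example["output"]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = ds.map(tokenize, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # Causal-LM collator copies input_ids into labels for next-token loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```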
🐳 Aurora is a Chinese-language MoE model: further work built on Mixtral-8x7B that activates the model's Chinese open-domain chat capability.
llm, large-language-models, language-model, gpt, fine-tuning, lora, instruction-tuning, qlora, chinese, mixtral, mixtral-8x7b, mixtral-8x7b-instruct - Python
Updated 7 months ago
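A minimal sketch of chatting with a Mixtral-8x7B-based model through the standard transformers chat-template API. The checkpoint below is the upstream Mixtral instruct model, used only as a stand-in; substitute the Aurora checkpoint to reproduce its Chinese chat behavior.

```python
# Chat with a Mixtral-8x7B-class model via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # stand-in for the Aurora checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "请用中文介绍一下混合专家模型。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

In a mixture-of-experts model like Mixtral, a router activates only 2 of the 8 expert FFNs per token, so inference costs far less than the full parameter count suggests.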
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
llama, chatgpt, llama2, chatbot, gpt-4, instruction-tuning, vision-language-model, visual-language-learning, foundation-models, llama-2, llava, multi-modality, multimodal - Python
Updated 7 months ago
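A minimal sketch of LLaVA-style visual instruction-following inference, using the community llava-hf port in transformers. The checkpoint name, image URL, and USER/ASSISTANT prompt format are assumptions based on that port, not this repo's own scripts.

```python
# Visual question answering with a LLaVA-1.5 checkpoint.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

name = "llava-hf/llava-1.5-7b-hf"  # assumed community port of LLaVA-1.5
processor = AutoProcessor.from_pretrained(name)
model = LlavaForConditionalGeneration.from_pretrained(
    name, torch_dtype=torch.float16, device_map="auto"
)

url = "https://llava-vl.github.io/static/images/view.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The <image> placeholder marks where the image features are spliced
# into the language model's token sequence.
prompt = "USER: <image>\nWhat is unusual about this image? ASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt").to(
    model.device, torch.float16
)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(out[0], skip_special_tokens=True))
```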
🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
machine-learning, deep-learning, chatgpt, gpt-4, instruction-tuning, visual-language-learning, foundation-models, multi-modality, apple-vision-pro, artificial-inteligence, egocentric-vision, embodied, embodied-ai, large-scale-models - Python
Updated 7 months ago
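The multimodal in-context learning Otter showcases rests on training data where each query is paired with a few related demonstrations. Below is a sketch of that interleaved structure in the spirit of MIMIC-IT; the field names and contents are illustrative, not the dataset's actual schema.

```python
# Illustrative shape of a multimodal in-context training sample:
# a handful of (image, instruction, response) demonstrations followed
# by a query the model must answer in the same style.
example = {
    "in_context": [
        {
            "image": "demo_0.jpg",
            "instruction": "What is the person in the picture doing?",
            "response": "They are slicing vegetables at a kitchen counter.",
        },
    ],
    "query": {
        "image": "demo_1.jpg",
        "instruction": "What should I do next after chopping the onions?",
    },
}
# The model consumes the in-context pairs plus the query image and
# instruction, then generates the response for the query.
```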