Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
- Python
Updated 7 months ago
LlamaIndex is a data framework for your LLM applications
- Python
Updated 8 months ago
🐳 Aurora is a [Chinese Version] MoE model. Aurora builds on Mixtral-8x7B and activates the model's chat capability in the Chinese open domain.
Topics: llm, large-language-models, language-model, gpt, fine-tuning, lora, instruction-tuning, qlora, chinese, mixtral, mixtral-8x7b, mixtral-8x7b-instruct
- Python
Updated 7 months ago
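Aurora's base model, Mixtral-8x7B, is a sparse mixture-of-experts (MoE) architecture: a gating network routes each token to only the top-k experts. A toy pure-Python sketch of that routing idea (hypothetical scalar "experts", not the repository's actual code):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, gate_logits, k=2):
    """Route input x to the top-k experts and mix their outputs
    with renormalized gate weights (Mixtral-style sparse MoE)."""
    probs = softmax(gate_logits)
    topk = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in topk)
    return sum(probs[i] / norm * experts[i](x) for i in topk)

# Toy example: 4 scalar "experts"; only 2 are active per token.
experts = [lambda x, s=s: s * x for s in (1.0, 2.0, 3.0, 4.0)]
y = moe_forward(1.0, experts, gate_logits=[0.1, 2.0, 0.3, 1.5], k=2)
```

Only k of the experts run per token, which is why an 8x7B MoE can be served far more cheaply than a dense model of the same total parameter count.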
Unify Efficient Fine-tuning of 100+ LLMs
Topics: llm, gpt, language-model, agent, baichuan, chatglm, fine-tuning, generative-ai, instruction-tuning, large-language-models, llama, lora, mistral, mixture-of-experts, peft, qlora, quantization, qwen, rlhf, transformers
- Python
Updated 8 months ago
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
- Python
Updated 5 months ago
Operating LLMs in production
Topics: llm, ai, llama, fine-tuning, mistral, mlops, llama2, llm-inference, llm-serving, llmops, ml, falcon, bentoml, llm-ops, model-inference, mpt, open-source-llm, openllm, stablelm, vicuna
- Python
Updated 7 months ago
A comprehensive guide to building RAG-based LLM applications for production.
- Jupyter Notebook
Updated 7 months ago
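The core of any RAG pipeline is the retrieval step: rank stored documents by similarity to the query, then prepend the best matches to the LLM prompt. A minimal bag-of-words sketch of that step (stdlib only; a production system like the one this guide describes would use dense embeddings and a vector index):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query (the 'R' in RAG)."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

docs = [
    "LoRA adds low-rank adapters for fine-tuning",
    "RAG retrieves documents to ground LLM answers",
]
top = retrieve("how does RAG ground answers", docs, k=1)
# The retrieved text would then be injected into the LLM prompt as context.
```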
LLM fine-tuning with PEFT
- Jupyter Notebook
Updated 7 months ago
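Several repositories in this list (LLM-Adapters, LLaMA-Factory, this PEFT notebook) rely on the same LoRA idea: the frozen weight W is adjusted by a low-rank update ΔW = (α/r)·B·A, where B (d×r) and A (r×d) are the only trainable matrices. A pure-Python sketch of merging an adapter back into the base weight, with toy dimensions (not any repository's actual code):

```python
def matmul(A, B):
    """Plain nested-list matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_merge(W, A, B, alpha, r):
    """Merge a LoRA adapter into a frozen weight:
    W' = W + (alpha / r) * B @ A, with B shaped d x r and A shaped r x d."""
    scale = alpha / r
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy 2x2 weight with a rank-1 adapter (r=1, alpha=2):
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # d x r
A = [[0.5, 0.5]]     # r x d
W_merged = lora_merge(W, A, B, alpha=2.0, r=1)
```

Because only A and B are trained (2·d·r parameters instead of d²), the same base model can host many cheap task-specific adapters, which is what makes the "adapter family" framing of the EMNLP paper above practical.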
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
- Python
Updated 7 months ago