Making large AI models cheaper, faster and more accessible
ai, deep-learning, inference, foundation-models, model-parallelism, pipeline-parallelism, data-parallelism, big-model, distributed-computing, heterogeneous-training, hpc, large-scale - Python
Updated 7 months ago
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
llama, chatgpt, llama2, chatbot, gpt-4, instruction-tuning, vision-language-model, visual-language-learning, foundation-models, llama-2, llava, multi-modality, multimodal - Python
Updated 7 months ago
🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
machine-learning, deep-learning, chatgpt, gpt-4, instruction-tuning, visual-language-learning, foundation-models, multi-modality, apple-vision-pro, artificial-inteligence, egocentric-vision, embodied, embodied-ai, large-scale-models - Python
Updated 7 months ago
Official implementation of BGPT @ ICLR 2024 paper "Meta Prompting for AI Systems" (https://arxiv.org/abs/2311.11482)
- Python
Updated 6 months ago