Mixture-of-Experts for Large Vision-Language Models
- Python
- Updated 7 months ago
ModuleFormer is a MoE-based architecture that combines two types of experts: stick-breaking attention heads and feedforward experts (a minimal routing sketch follows this listing). We released a collection of ModuleFormer-based Language Models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
- Python
- Updated 7 months ago
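
To make the feedforward-expert half of that design concrete, here is a minimal top-k routing sketch in PyTorch. Everything in it (class names, the router, hyperparameters such as `n_experts=8` and `k=2`) is an illustrative assumption, not code from the ModuleFormer repository, and the stick-breaking attention heads are not shown.

```python
# Minimal sketch of mixture-of-experts routing over feedforward experts.
# Hypothetical names and hyperparameters; not ModuleFormer's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeedForwardExpert(nn.Module):
    """One feedforward expert: a standard two-layer MLP."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class MoEFeedForward(nn.Module):
    """Route each token to its top-k experts and mix their outputs."""

    def __init__(self, d_model: int = 512, d_hidden: int = 2048,
                 n_experts: int = 8, k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            FeedForwardExpert(d_model, d_hidden) for _ in range(n_experts)
        )
        self.router = nn.Linear(d_model, n_experts)  # per-token gating scores
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model)
        logits = self.router(x)                     # (n_tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)        # renormalize over the k picks
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e            # tokens whose slot-th pick is e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


tokens = torch.randn(16, 512)
print(MoEFeedForward()(tokens).shape)  # torch.Size([16, 512])
```

Because only k of the n experts run per token, the layer's parameter count grows with n while the per-token compute stays roughly that of k dense feedforward blocks, which is the usual motivation for MoE scaling.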