A library for accelerating Transformer models on NVIDIA GPUs, including support for 8-bit floating point (FP8) precision on Hopper and Ada GPUs, delivering better performance with lower memory utilization in both training and inference.
- Python
Updated 5 months ago
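The FP8 formats this library targets trade mantissa bits for dynamic range, which is where the memory savings come from. A minimal pure-Python sketch (not the Transformer Engine API) of rounding a value to the E4M3 layout (1 sign, 4 exponent, 3 mantissa bits) illustrates the precision loss involved:

```python
import math

def fp8_e4m3_round_trip(x: float) -> float:
    """Quantize x to FP8 E4M3 (3 mantissa bits, normal exponent
    range roughly [-6, 8]) and decode it back. Illustrative sketch
    only: NaN/Inf and E4M3's finite-only special cases are ignored."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    # Exponent of the leading significand bit.
    e = math.floor(math.log2(mag))
    # Clamp to the (approximate) normal exponent range of E4M3.
    e = max(-6, min(8, e))
    # With 3 mantissa bits, values are spaced 2**(e - 3) apart.
    step = 2.0 ** (e - 3)
    return sign * round(mag / step) * step

print(fp8_e4m3_round_trip(1.5))   # exactly representable
print(fp8_e4m3_round_trip(0.1))   # rounds to the nearest FP8 value
```

Values like 1.5 survive the round trip exactly, while 0.1 lands on the nearest representable neighbor (0.1015625), which is why FP8 training relies on per-tensor scaling to keep such errors small.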
A high-throughput and memory-efficient inference and serving engine for LLMs
- Python
Updated 8 months ago
Samples for CUDA developers demonstrating features in the CUDA Toolkit
- C
Updated 2 months ago
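The canonical first example in the CUDA samples is a vector-add kernel, where each GPU thread computes one output element. As a hedged sketch, the plain-C CPU reference below shows the computation such a sample parallelizes (no CUDA required to run; the real sample launches a `__global__` kernel over a grid of thread blocks):

```c
#include <stddef.h>

/* CPU reference for a vector-add kernel: on the GPU each thread
 * would compute one c[i]; on the CPU we simply loop. Illustrative
 * sketch only, not code from the cuda-samples repository. */
void vector_add(const float *a, const float *b, float *c, size_t n) {
    for (size_t i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}
```

The GPU version replaces the loop with an index computed from the block and thread IDs, which is the pattern most of the introductory samples build on.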