[Doc] Documentation on supported hardware for quantization methods (#5745)
parent bd620b01fb · commit 5b15bde539
docs/source/index.rst
@@ -100,6 +100,7 @@ Documentation
    :maxdepth: 1
    :caption: Quantization
 
+   quantization/supported_hardware
    quantization/auto_awq
    quantization/fp8
    quantization/fp8_e5m2_kvcache
docs/source/quantization/fp8.rst
@@ -3,7 +3,9 @@
 FP8
 ==================
 
-vLLM supports FP8 (8-bit floating point) computation using hardware acceleration on GPUs such as Nvidia H100 and AMD MI300x. Currently, only Hopper and Ada Lovelace GPUs are supported. Quantization of models with FP8 allows for a 2x reduction in model memory requirements and up to a 1.6x improvement in throughput with minimal impact on accuracy.
+vLLM supports FP8 (8-bit floating point) weight and activation quantization using hardware acceleration on GPUs such as Nvidia H100 and AMD MI300x.
+Currently, only Hopper and Ada Lovelace GPUs are supported.
+Quantization of models with FP8 allows for a 2x reduction in model memory requirements and up to a 1.6x improvement in throughput with minimal impact on accuracy.
 
 Please visit the HF collection of `quantized FP8 checkpoints of popular LLMs ready to use with vLLM <https://huggingface.co/collections/neuralmagic/fp8-llms-for-vllm-666742ed2b78b7ac8df13127>`_.
docs/source/quantization/supported_hardware.rst (new file, 30 lines)
@@ -0,0 +1,30 @@
+.. _supported_hardware_for_quantization:
+
+Supported Hardware for Quantization Kernels
+===========================================
+
+The table below shows the compatibility of various quantization implementations with different hardware platforms in vLLM:
+
+============== ====== ======= ======= ===== ====== ======= ========= ======= ============== ==========
+Implementation Volta  Turing  Ampere  Ada   Hopper AMD GPU Intel GPU x86 CPU AWS Inferentia Google TPU
+============== ====== ======= ======= ===== ====== ======= ========= ======= ============== ==========
+AQLM           ✅     ✅      ✅      ✅    ✅     ❌      ❌        ❌      ❌             ❌
+AWQ            ❌     ✅      ✅      ✅    ✅     ❌      ❌        ❌      ❌             ❌
+DeepSpeedFP    ✅     ✅      ✅      ✅    ✅     ❌      ❌        ❌      ❌             ❌
+FP8            ❌     ❌      ❌      ✅    ✅     ❌      ❌        ❌      ❌             ❌
+Marlin         ❌     ❌      ✅      ✅    ✅     ❌      ❌        ❌      ❌             ❌
+GPTQ           ✅     ✅      ✅      ✅    ✅     ❌      ❌        ❌      ❌             ❌
+SqueezeLLM     ✅     ✅      ✅      ✅    ✅     ❌      ❌        ❌      ❌             ❌
+bitsandbytes   ✅     ✅      ✅      ✅    ✅     ❌      ❌        ❌      ❌             ❌
+============== ====== ======= ======= ===== ====== ======= ========= ======= ============== ==========
+
+Notes:
+^^^^^^
+
+- Volta refers to SM 7.0, Turing to SM 7.5, Ampere to SM 8.0/8.6, Ada to SM 8.9, and Hopper to SM 9.0.
+- "✅" indicates that the quantization method is supported on the specified hardware.
+- "❌" indicates that the quantization method is not supported on the specified hardware.
+
+Please note that this compatibility chart may be subject to change as vLLM continues to evolve and expand its support for different hardware platforms and quantization methods.
+
+For the most up-to-date information on hardware support and quantization methods, please check the `quantization directory <https://github.com/vllm-project/vllm/tree/main/vllm/model_executor/layers/quantization>`_ or consult with the vLLM development team.
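To relate the architecture columns in the table to the SM numbers in the notes, here is a small sketch, assuming PyTorch with a CUDA device available; it reads the local GPU's compute capability and checks it against the FP8 row (Ada, SM 8.9, and newer):

.. code-block:: python

    # Sketch: map the local NVIDIA GPU's compute capability to the
    # architecture names used in the table above. Assumes PyTorch + CUDA.
    import torch

    major, minor = torch.cuda.get_device_capability()  # e.g. (9, 0) on H100
    sm = 10 * major + minor

    # Mirrors the note: Volta = 7.0, Turing = 7.5, Ampere = 8.0/8.6,
    # Ada = 8.9, Hopper = 9.0.
    names = {70: "Volta", 75: "Turing", 80: "Ampere", 86: "Ampere",
             89: "Ada", 90: "Hopper"}
    print(f"SM {major}.{minor} -> {names.get(sm, 'unknown')}")

    # Per the FP8 row: hardware FP8 kernels need Ada (SM 8.9) or newer.
    print("FP8 supported:", sm >= 89)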
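The registered method names can also be listed programmatically rather than by browsing the directory. A sketch, assuming the ``QUANTIZATION_METHODS`` mapping exported from that directory's ``__init__`` (the registry name may differ between vLLM versions):

.. code-block:: python

    # Sketch: enumerate the quantization methods vLLM has registered.
    # Assumes the QUANTIZATION_METHODS mapping exported by
    # vllm/model_executor/layers/quantization/__init__.py; the exact
    # export may vary across vLLM versions.
    from vllm.model_executor.layers.quantization import QUANTIZATION_METHODS

    for name in QUANTIZATION_METHODS:
        print(name)  # e.g. "aqlm", "awq", "fp8", "gptq", ...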