vllm/docs/source/quantization (last updated 2024-10-09 10:28:08 -06:00)
auto_awq.rst [Doc] fix the autoAWQ example (#7937) 2024-08-28 12:12:32 +00:00
bnb.rst [Misc] Upgrade bitsandbytes to the latest version 0.44.0 (#8768) 2024-09-24 17:08:55 -07:00
fp8_e4m3_kvcache.rst [Core/Bugfix] Add FP8 K/V Scale and dtype conversion for prefix/prefill Triton Kernel (#7208) 2024-08-12 22:47:41 +00:00
fp8_e5m2_kvcache.rst [Core/Bugfix] Add FP8 K/V Scale and dtype conversion for prefix/prefill Triton Kernel (#7208) 2024-08-12 22:47:41 +00:00
fp8.rst Add lm-eval directly to requirements-test.txt (#9161) 2024-10-08 18:22:31 -07:00
gguf.rst [Doc] Add documentation for GGUF quantization (#8618) 2024-09-19 13:15:55 -06:00
int8.rst [Doc] Add docs for llmcompressor INT8 and FP8 checkpoints (#7444) 2024-08-16 13:59:16 -07:00
supported_hardware.rst [Hardware][CPU] Support AWQ for CPU backend (#7515) 2024-10-09 10:28:08 -06:00