vllm/docs/source/quantization
File                        Latest commit message                                                              Date
auto_awq.rst                [Doc] fix the autoAWQ example (#7937)                                              2024-08-28 12:12:32 +00:00
bnb.rst                     [bitsandbytes]: support read bnb pre-quantized model (#5753)                       2024-07-23 23:45:09 +00:00
fp8_e4m3_kvcache.rst        [Core/Bugfix] Add FP8 K/V Scale and dtype conversion for prefix/prefill Triton Kernel (#7208)  2024-08-12 22:47:41 +00:00
fp8_e5m2_kvcache.rst        [Core/Bugfix] Add FP8 K/V Scale and dtype conversion for prefix/prefill Triton Kernel (#7208)  2024-08-12 22:47:41 +00:00
fp8.rst                     [Doc] Add docs for llmcompressor INT8 and FP8 checkpoints (#7444)                  2024-08-16 13:59:16 -07:00
gguf.rst                    [Doc] Add documentation for GGUF quantization (#8618)                              2024-09-19 13:15:55 -06:00
int8.rst                    [Doc] Add docs for llmcompressor INT8 and FP8 checkpoints (#7444)                  2024-08-16 13:59:16 -07:00
supported_hardware.rst      [Misc] Remove SqueezeLLM (#8220)                                                   2024-09-06 16:29:03 -06:00