vllm/tests/quantization
Latest commit: abfe705a02 — wait, no em-dash. Latest commit abfe705a02
[ Misc ] Support Fp8 via llm-compressor (#6110)
Author: Robert Shaw
Co-authored-by: Robert Shaw <rshaw@neuralmagic>
Date: 2024-07-07 20:42:11 +00:00
File                        Last commit message                                                                         Date
__init__.py                 [CI/Build] Move test_utils.py to tests/utils.py (#4425)                                     2024-05-13 23:50:09 +09:00
test_bitsandbytes.py        [CI/Build][REDO] Add is_quant_method_supported to control quantization test configurations (#5466)  2024-06-13 15:18:08 +00:00
test_compressed_tensors.py  [ Misc ] Support Fp8 via llm-compressor (#6110)                                             2024-07-07 20:42:11 +00:00
test_configs.py             [mypy] Enable type checking for test directory (#5017)                                      2024-06-15 04:45:31 +00:00
test_fp8.py                 [Kernel] Expand FP8 support to Ampere GPUs using FP8 Marlin (#5975)                         2024-07-03 17:38:00 +00:00
test_lm_head.py             [CORE] Quantized lm-head Framework (#4442)                                                  2024-07-02 22:25:17 +00:00
utils.py                    [hardware][misc] introduce platform abstraction (#6080)                                     2024-07-02 20:12:22 -07:00