vllm/tests/quantization

Latest commit: 87525fab92 by dongmao zhang
[bitsandbytes]: support read bnb pre-quantized model (#5753)
Co-authored-by: Michael Goin <michael@neuralmagic.com>
2024-07-23 23:45:09 +00:00
File                        Last commit                                                          Date
__init__.py                 [CI/Build] Move test_utils.py to tests/utils.py (#4425)              2024-05-13 23:50:09 +09:00
test_bitsandbytes.py        [bitsandbytes]: support read bnb pre-quantized model (#5753)         2024-07-23 23:45:09 +00:00
test_compressed_tensors.py  [Misc] Support FP8 kv cache scales from compressed-tensors (#6528)   2024-07-23 04:11:50 +00:00
test_configs.py             [Kernel][Core] Add AWQ support to the Marlin kernel (#6612)          2024-07-21 19:41:42 -04:00
test_fp8.py                 [CI] Add smoke test for non-uniform AutoFP8 quantization (#6702)     2024-07-23 22:45:12 +00:00
test_lm_head.py             [CORE] Quantized lm-head Framework (#4442)                           2024-07-02 22:25:17 +00:00
utils.py                    [hardware][misc] introduce platform abstraction (#6080)              2024-07-02 20:12:22 -07:00