vllm/tests/quantization
__init__.py                — [CI/Build] Move test_utils.py to tests/utils.py (#4425) — 2024-05-13 23:50:09 +09:00
test_bitsandbytes.py       — support bitsandbytes 8-bit and FP4 quantized models (#7445) — 2024-08-29 19:09:08 -04:00
test_compressed_tensors.py — [Kernel] Expand MoE weight loading + Add Fused Marlin MoE Kernel (#7766) — 2024-08-27 15:07:09 -07:00
test_configs.py            — [Kernel][Core] Add AWQ support to the Marlin kernel (#6612) — 2024-07-21 19:41:42 -04:00
test_cpu_offload.py        — [ci][test] adjust max wait time for cpu offloading test (#7709) — 2024-08-20 17:12:44 -07:00
test_experts_int8.py       — [Kernel] W8A16 Int8 inside FusedMoE (#7415) — 2024-08-16 10:06:51 -07:00
test_fp8.py                — [Misc/Testing] Use torch.testing.assert_close (#7324) — 2024-08-16 04:24:04 +00:00
test_lm_head.py            — [Core] Support loading GGUF model (#5191) — 2024-08-05 17:54:23 -06:00
utils.py                   — [hardware][misc] introduce platform abstraction (#6080) — 2024-07-02 20:12:22 -07:00