vllm/tests/quantization
File                        Last commit (PR)                                                  Last updated
__init__.py                 [CI/Build] Move test_utils.py to tests/utils.py (#4425)           2024-05-13 23:50:09 +09:00
test_bitsandbytes.py        [bitsandbytes]: support read bnb pre-quantized model (#5753)      2024-07-23 23:45:09 +00:00
test_compressed_tensors.py  [Misc] Revert compressed-tensors code reuse (#7521)               2024-08-14 15:07:37 -07:00
test_configs.py             [Kernel][Core] Add AWQ support to the Marlin kernel (#6612)       2024-07-21 19:41:42 -04:00
test_cpu_offload.py         [ci][test] adjust max wait time for cpu offloading test (#7709)   2024-08-20 17:12:44 -07:00
test_experts_int8.py        [Kernel] W8A16 Int8 inside FusedMoE (#7415)                       2024-08-16 10:06:51 -07:00
test_fp8.py                 [Misc/Testing] Use torch.testing.assert_close (#7324)             2024-08-16 04:24:04 +00:00
test_lm_head.py             [Core] Support loading GGUF model (#5191)                         2024-08-05 17:54:23 -06:00
utils.py                    [hardware][misc] introduce platform abstraction (#6080)           2024-07-02 20:12:22 -07:00
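The files above are ordinary pytest modules. As a minimal sketch (assuming it is run from the vLLM repository root with pytest and the test dependencies installed; the exact invocation used in CI may differ), the quantization suite can be launched programmatically like this:

```python
# Minimal sketch: run the quantization tests via pytest's Python API.
# Assumes execution from the vLLM repository root; paths other than
# tests/quantization are illustrative, not taken from the listing above.
import sys

import pytest

if __name__ == "__main__":
    # Equivalent to `pytest -v tests/quantization` on the command line.
    # Narrow the run by passing a single file instead, e.g.
    # "tests/quantization/test_fp8.py".
    sys.exit(pytest.main(["-v", "tests/quantization"]))
```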