vllm/tests/quantization
| File | Last commit | Date |
|---|---|---|
| __init__.py | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00 |
| test_bitsandbytes.py | support bitsandbytes quantization with more models (#9148) | 2024-10-08 19:52:19 -06:00 |
| test_compressed_tensors.py | [Misc] Directly use compressed-tensors for checkpoint definitions (#8909) | 2024-10-15 15:40:25 -07:00 |
| test_configs.py | [Model] Add user-configurable task for models that support both generation and embedding (#9424) | 2024-10-18 11:31:58 -07:00 |
| test_cpu_offload.py | [ci][test] adjust max wait time for cpu offloading test (#7709) | 2024-08-20 17:12:44 -07:00 |
| test_experts_int8.py | [Kernel] W8A16 Int8 inside FusedMoE (#7415) | 2024-08-16 10:06:51 -07:00 |
| test_fp8.py | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| test_ipex_quant.py | [Hardware][CPU] Support AWQ for CPU backend (#7515) | 2024-10-09 10:28:08 -06:00 |
| test_lm_head.py | [Core] Support loading GGUF model (#5191) | 2024-08-05 17:54:23 -06:00 |
| utils.py | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |