vllm/tests/quantization
Latest commit 6b2d25efc7 by Yan Ma <yan.ma@intel.com>: [Hardware][XPU] AWQ/GPTQ support for xpu backend (#10107), 2024-11-18 11:18:05 -07:00
__init__.py [CI/Build] Move test_utils.py to tests/utils.py (#4425) 2024-05-13 23:50:09 +09:00
test_bitsandbytes.py [Bugfix] bitsandbytes models fail to run pipeline parallel (#10200) 2024-11-13 09:56:39 -07:00
test_compressed_tensors.py [bugfix] Fix static asymmetric quantization case (#10334) 2024-11-15 09:35:11 +08:00
test_configs.py [Model] Add user-configurable task for models that support both generation and embedding (#9424) 2024-10-18 11:31:58 -07:00
test_cpu_offload.py [ci][test] adjust max wait time for cpu offloading test (#7709) 2024-08-20 17:12:44 -07:00
test_experts_int8.py [Kernel] W8A16 Int8 inside FusedMoE (#7415) 2024-08-16 10:06:51 -07:00
test_fp8.py [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
test_ipex_quant.py [Hardware][XPU] AWQ/GPTQ support for xpu backend (#10107) 2024-11-18 11:18:05 -07:00
test_lm_head.py [Core] Support loading GGUF model (#5191) 2024-08-05 17:54:23 -06:00
utils.py [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00