vllm/tests/kernels
allclose_default.py     [ROCm] Fix some kernels failed unit tests (#2498)                   2024-02-05 14:25:36 -08:00
conftest.py             Support FP8-E5M2 KV Cache (#2279)                                   2024-01-28 16:43:54 -08:00
test_activation.py      Optimize GeGLU layer in Gemma (#2975)                               2024-02-21 20:17:52 -08:00
test_attention.py       [ROCm] Fix some kernels failed unit tests (#2498)                   2024-02-05 14:25:36 -08:00
test_cache.py           [Minor] More fix of test_cache.py CI test failure (#2750)           2024-02-06 11:38:38 -08:00
test_layernorm.py       Remove hardcoded device="cuda" to support more devices (#2503)      2024-02-01 15:46:39 -08:00
test_moe.py             Add fused top-K softmax kernel for MoE (#2769)                      2024-02-05 17:38:02 -08:00
test_pos_encoding.py    [ROCm] Fix some kernels failed unit tests (#2498)                   2024-02-05 14:25:36 -08:00
test_prefix_prefill.py  Remove hardcoded device="cuda" to support more devices (#2503)      2024-02-01 15:46:39 -08:00