vllm/tests/kernels
File                     Last commit date              Last commit message
conftest.py              2024-01-28 16:43:54 -08:00    Support FP8-E5M2 KV Cache (#2279)
test_activation.py       2024-01-02 19:09:59 -08:00    [FIX] Support non-zero CUDA devices in custom kernels (#1959)
test_attention.py        2024-01-28 16:43:54 -08:00    Support FP8-E5M2 KV Cache (#2279)
test_cache.py            2024-01-31 10:12:11 -08:00    [Minor] Fix test_cache.py CI test failure (#2684)
test_layernorm.py        2024-01-02 19:09:59 -08:00    [FIX] Support non-zero CUDA devices in custom kernels (#1959)
test_moe.py              2024-01-31 14:34:17 -08:00    Add unit test for Mixtral MoE layer (#2677)
test_pos_encoding.py     2024-01-02 19:09:59 -08:00    [FIX] Support non-zero CUDA devices in custom kernels (#1959)
test_prefix_prefill.py   2024-01-22 14:47:25 -08:00    Add a 1-line docstring to explain why calling context_attention_fwd twice in test_prefix_prefill.py (#2553)