vllm/tests/kernels
conftest.py             Support FP8-E5M2 KV Cache (#2279)                                2024-01-28 16:43:54 -08:00
test_activation.py      [FIX] Support non-zero CUDA devices in custom kernels (#1959)    2024-01-02 19:09:59 -08:00
test_attention.py       Support FP8-E5M2 KV Cache (#2279)                                2024-01-28 16:43:54 -08:00
test_cache.py           Add swap_blocks unit tests (#2616)                               2024-01-30 09:30:50 -08:00
test_fused_moe.py       DeepseekMoE support with Fused MoE kernel (#2453)                2024-01-29 21:19:48 -08:00
test_layernorm.py       [FIX] Support non-zero CUDA devices in custom kernels (#1959)    2024-01-02 19:09:59 -08:00
test_pos_encoding.py    [FIX] Support non-zero CUDA devices in custom kernels (#1959)    2024-01-02 19:09:59 -08:00
test_prefix_prefill.py  Add a 1-line docstring to explain why calling context_attention_fwd twice in test_prefix_prefill.py (#2553)  2024-01-22 14:47:25 -08:00