vllm/tests/kernels

Latest commit: 9090bf02e7 "Support FP8-E5M2 KV Cache (#2279)" by zhaoyang-star, 2024-01-28 16:43:54 -08:00
Co-authored-by: zhaoyang <zhao.yang16@zte.com.cn>
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
File                    Last commit                                                                                                              Date
conftest.py             Support FP8-E5M2 KV Cache (#2279)                                                                                        2024-01-28 16:43:54 -08:00
test_activation.py      [FIX] Support non-zero CUDA devices in custom kernels (#1959)                                                            2024-01-02 19:09:59 -08:00
test_attention.py       Support FP8-E5M2 KV Cache (#2279)                                                                                        2024-01-28 16:43:54 -08:00
test_cache.py           Support FP8-E5M2 KV Cache (#2279)                                                                                        2024-01-28 16:43:54 -08:00
test_layernorm.py       [FIX] Support non-zero CUDA devices in custom kernels (#1959)                                                            2024-01-02 19:09:59 -08:00
test_pos_encoding.py    [FIX] Support non-zero CUDA devices in custom kernels (#1959)                                                            2024-01-02 19:09:59 -08:00
test_prefix_prefill.py  Add a 1-line docstring to explain why calling context_attention_fwd twice in test_prefix_prefill.py (#2553)              2024-01-22 14:47:25 -08:00