Directory listing: vllm/csrc
| File | Last commit message | Last commit date |
|---|---|---|
| activation_kernels.cu | Optimize data movement (#20) | 2023-04-02 00:30:17 -07:00 |
| activation.cpp | Optimize data movement (#20) | 2023-04-02 00:30:17 -07:00 |
| attention_kernels.cu | Support various block sizes & Change default block size to 16 (#38) | 2023-04-15 09:03:24 -07:00 |
| attention_utils.h | Add custom kernel for RMS normalization (#16) | 2023-04-01 00:51:22 +08:00 |
| attention.cpp | Support various block sizes & Change default block size to 16 (#38) | 2023-04-15 09:03:24 -07:00 |
| cache_kernels.cu | Memcpy kernel for flash attention (#29) | 2023-04-10 18:22:49 -07:00 |
| cache.cpp | Memcpy kernel for flash attention (#29) | 2023-04-10 18:22:49 -07:00 |
| cuda_primitives.h | Support various block sizes & Change default block size to 16 (#38) | 2023-04-15 09:03:24 -07:00 |
| layernorm_kernels.cu | Add custom kernel for RMS normalization (#16) | 2023-04-01 00:51:22 +08:00 |
| layernorm.cpp | Add custom kernel for RMS normalization (#16) | 2023-04-01 00:51:22 +08:00 |
| pos_encoding_kernels.cu | Optimize data movement (#20) | 2023-04-02 00:30:17 -07:00 |
| pos_encoding.cpp | Optimize data movement (#20) | 2023-04-02 00:30:17 -07:00 |
| reduction_utils.h | Add custom kernel for RMS normalization (#16) | 2023-04-01 00:51:22 +08:00 |