vllm/csrc
Name                      Last commit                                                           Last commit date
attention/                Fix a bug in attention kernel (#68)                                   2023-05-04 02:56:09 -07:00
activation_kernels.cu     Support bfloat16 data type (#54)                                      2023-05-03 14:09:44 -07:00
activation.cpp            Optimize data movement (#20)                                          2023-04-02 00:30:17 -07:00
attention.cpp             Support various block sizes & Change default block size to 16 (#38)  2023-04-15 09:03:24 -07:00
cache_kernels.cu          Support bfloat16 data type (#54)                                      2023-05-03 14:09:44 -07:00
cache.cpp                 Memcpy kernel for flash attention (#29)                               2023-04-10 18:22:49 -07:00
layernorm_kernels.cu      Support bfloat16 data type (#54)                                      2023-05-03 14:09:44 -07:00
layernorm.cpp             Add custom kernel for RMS normalization (#16)                         2023-04-01 00:51:22 +08:00
pos_encoding_kernels.cu   Support bfloat16 data type (#54)                                      2023-05-03 14:09:44 -07:00
pos_encoding.cpp          Add support for GPT-NeoX (Pythia) (#50)                               2023-04-28 00:32:10 -07:00
reduction_utils.cuh       Refactor attention kernels (#53)                                      2023-05-03 13:40:13 -07:00
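
The listing pairs each CUDA kernel file (*.cu) with a C++ file (*.cpp) that exposes the kernel's host-side launcher as a PyTorch extension op. As a rough illustration of that split, below is a minimal sketch in the spirit of layernorm_kernels.cu plus the binding layernorm.cpp would supply. All names, signatures, and the fixed fp32 / 256-thread configuration are assumptions for illustration, not vLLM's actual code.

```cuda
// Hypothetical sketch only -- names and signatures are illustrative,
// not copied from layernorm_kernels.cu / layernorm.cpp.
#include <torch/extension.h>

// One thread block per token; threads cooperatively reduce over the
// hidden dimension, then apply out = x * rsqrt(mean(x^2) + eps) * weight.
__global__ void rms_norm_kernel(float* __restrict__ out,
                                const float* __restrict__ input,
                                const float* __restrict__ weight,
                                float epsilon, int hidden_size) {
  __shared__ float partial[256];  // assumes blockDim.x == 256
  __shared__ float inv_rms;

  // Each thread accumulates a partial sum of squares over a strided slice.
  float sum_sq = 0.0f;
  for (int i = threadIdx.x; i < hidden_size; i += blockDim.x) {
    float x = input[blockIdx.x * hidden_size + i];
    sum_sq += x * x;
  }
  partial[threadIdx.x] = sum_sq;
  __syncthreads();

  // Tree reduction in shared memory (the real kernels would use the
  // helpers in reduction_utils.cuh instead).
  for (int offset = blockDim.x / 2; offset > 0; offset >>= 1) {
    if (threadIdx.x < offset) partial[threadIdx.x] += partial[threadIdx.x + offset];
    __syncthreads();
  }
  if (threadIdx.x == 0) inv_rms = rsqrtf(partial[0] / hidden_size + epsilon);
  __syncthreads();

  // Normalize and scale by the learned weight.
  for (int i = threadIdx.x; i < hidden_size; i += blockDim.x) {
    int idx = blockIdx.x * hidden_size + i;
    out[idx] = input[idx] * inv_rms * weight[i];
  }
}

// Host-side launcher plus the Python binding the .cpp file would provide.
void rms_norm(torch::Tensor& out, torch::Tensor& input,
              torch::Tensor& weight, double epsilon) {
  int num_tokens = input.size(0);
  int hidden_size = input.size(1);
  rms_norm_kernel<<<num_tokens, 256>>>(
      out.data_ptr<float>(), input.data_ptr<float>(),
      weight.data_ptr<float>(), static_cast<float>(epsilon), hidden_size);
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("rms_norm", &rms_norm, "RMS normalization (CUDA), fp32 sketch");
}
```

In the actual tree, the block reduction would presumably come from reduction_utils.cuh, and the kernel would be templated over scalar types rather than fixed to fp32, given that bfloat16 support landed in #54.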