vllm/csrc
attention                 Support FP32 (#141)                                                  2023-06-07 00:40:21 -07:00
activation_kernels.cu     Support bfloat16 data type (#54)                                     2023-05-03 14:09:44 -07:00
activation.cpp            Optimize data movement (#20)                                         2023-04-02 00:30:17 -07:00
attention.cpp             Support various block sizes & Change default block size to 16 (#38)  2023-04-15 09:03:24 -07:00
cache_kernels.cu          Support bfloat16 data type (#54)                                     2023-05-03 14:09:44 -07:00
cache.cpp                 Memcpy kernel for flash attention (#29)                              2023-04-10 18:22:49 -07:00
layernorm_kernels.cu      Support bfloat16 data type (#54)                                     2023-05-03 14:09:44 -07:00
layernorm.cpp             Add custom kernel for RMS normalization (#16)                        2023-04-01 00:51:22 +08:00
pos_encoding_kernels.cu   Support bfloat16 data type (#54)                                     2023-05-03 14:09:44 -07:00
pos_encoding.cpp          Add support for GPT-NeoX (Pythia) (#50)                              2023-04-28 00:32:10 -07:00
reduction_utils.cuh       Add copyright headers to source files adapted from FT (#104)         2023-05-14 22:19:19 -07:00