# Attention kernel from FasterTransformer
This CUDA extension wraps the single-query attention kernel from FasterTransformer v5.2.1 for benchmarking purposes.

To install:
```sh
cd csrc/ft_attention && pip install .
```
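Once built, the extension loads as a regular Python module. A minimal sketch to confirm the build and list the exposed bindings (the actual entry points and their signatures are declared in `ft_attention.cpp`; the module name `ft_attention` comes from `setup.py`):

```python
# Quick sanity check that the compiled extension imports correctly.
# The attention entry points themselves are defined in ft_attention.cpp,
# so listing the module's public names shows what the build exposes.
import ft_attention

print([name for name in dir(ft_attention) if not name.startswith("_")])
```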
As of 2023-09-17, this extension is no longer used in the FlashAttention repo. FlashAttention now implements `flash_attn_with_kvcache` with all the features of this `ft_attention` kernel (and more).
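For reference, here is a minimal sketch of the replacement path in the same single-query decoding setting this kernel targets. It assumes a flash-attn release recent enough to export `flash_attn_with_kvcache`; argument names and shapes follow the flash-attn interface, but check `flash_attn_interface.py` in your installed version for the exact signature.

```python
import torch
from flash_attn import flash_attn_with_kvcache

batch, seqlen_cache, nheads, headdim = 2, 512, 16, 64
device, dtype = "cuda", torch.float16

# Pre-allocated KV cache; only the first cache_seqlens entries per sequence are valid.
k_cache = torch.zeros(batch, seqlen_cache, nheads, headdim, device=device, dtype=dtype)
v_cache = torch.zeros(batch, seqlen_cache, nheads, headdim, device=device, dtype=dtype)
cache_seqlens = torch.full((batch,), 100, dtype=torch.int32, device=device)

# One new query token per sequence (single-query decoding), plus its new key/value,
# which flash_attn_with_kvcache appends to the cache in place at cache_seqlens.
q = torch.randn(batch, 1, nheads, headdim, device=device, dtype=dtype)
k_new = torch.randn(batch, 1, nheads, headdim, device=device, dtype=dtype)
v_new = torch.randn(batch, 1, nheads, headdim, device=device, dtype=dtype)

out = flash_attn_with_kvcache(
    q, k_cache, v_cache, k=k_new, v=v_new,
    cache_seqlens=cache_seqlens, causal=True,
)
print(out.shape)  # (batch, 1, nheads, headdim)
```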