# Attention kernel from FasterTransformer

This CUDA extension wraps the single-query (decoding) attention kernel from FasterTransformer v5.2.1 for benchmarking purposes.

To build and install:

```sh
cd csrc/ft_attention && pip install .
```
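
Once built, the extension can be imported from Python for benchmarking a single decoding step. The sketch below is only illustrative: the module name, the entry point `single_query_attention`, and the tensor shapes/layouts are assumptions for this example and should be checked against the actual binding in `ft_attention.cpp`.

```python
# Hypothetical usage sketch -- the real binding name, argument list, and
# KV-cache layout are defined in ft_attention.cpp and may differ.
import torch
import ft_attention  # module name assumed from this extension's setup.py

batch, nheads, headdim, cache_len = 2, 16, 64, 128
device, dtype = "cuda", torch.float16

# One new query token per sequence (single-query / decoding step).
q = torch.randn(batch, nheads, headdim, dtype=dtype, device=device)
k = torch.randn(batch, nheads, headdim, dtype=dtype, device=device)
v = torch.randn(batch, nheads, headdim, dtype=dtype, device=device)

# Pre-allocated KV cache for previously generated tokens; the kernel may
# expect a different internal layout than this plain (b, h, s, d) one.
k_cache = torch.randn(batch, nheads, cache_len, headdim, dtype=dtype, device=device)
v_cache = torch.randn_like(k_cache)

# Assumed entry point; consult ft_attention.cpp for the full signature
# (e.g. per-sample lengths, timestep, rotary embedding dim and base).
out = ft_attention.single_query_attention(q, k, v, k_cache, v_cache)
print(out.shape)  # expected: (batch, nheads, headdim)
```

This mirrors the intended use case: the kernel attends one new query token per sequence against a cached key/value history, which is the inner loop of autoregressive generation.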