# Attention kernel from FasterTransformer

This CUDA extension wraps the single-query attention kernel from FasterTransformer v5.2.1 for benchmarking purposes.

```sh
cd csrc/ft_attention && pip install .
```
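
After the build completes, a quick smoke test can confirm the extension loads. This is a minimal sketch, assuming the built module is importable as `ft_attention`; it only lists the exported bindings, so check `ft_attention.cpp` for the authoritative function names and argument lists before calling anything.

```python
# Minimal import smoke test (a sketch; the module name `ft_attention` is an
# assumption based on this extension's build -- adjust if it differs).
import torch  # load PyTorch first so the extension's libtorch symbols resolve
import ft_attention

# List what the extension actually exports; see ft_attention.cpp for the
# exact signatures of these bindings.
print([name for name in dir(ft_attention) if not name.startswith("_")])
```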