flash-attention/csrc/flash_attn/src
Name | Last commit message | Last commit date
fmha | Support all head dims that are multiples of 8, up to 128 | 2022-10-24 16:04:21 -07:00
.DS_Store | Rename, add benchmarking script | 2022-05-26 13:57:38 -07:00
fmha_block_dgrad_fp16_kernel_loop.sm80.cu | Implement cross attention | 2022-07-03 17:48:12 -07:00
fmha_block_dgrad_kernel_1xN_loop.h | Support all head dims that are multiples of 8, up to 128 | 2022-10-24 16:04:21 -07:00
fmha_block_fprop_fp16_kernel.sm80.cu | Implement cross attention | 2022-07-03 17:48:12 -07:00
fmha_block_fprop_kernel_1xN.h | Support all head dims that are multiples of 8, up to 128 | 2022-10-24 16:04:21 -07:00
fmha_blockmask.h | Implement cross attention | 2022-07-03 17:48:12 -07:00
fmha_dgrad_fp16_kernel_loop.sm80.cu | Support all head dims that are multiples of 8, up to 128 | 2022-10-24 16:04:21 -07:00
fmha_dgrad_kernel_1xN_loop.h | Support all head dims that are multiples of 8, up to 128 | 2022-10-24 16:04:21 -07:00
fmha_fprop_fp16_kernel.sm80.cu | Support all head dims that are multiples of 8, up to 128 | 2022-10-24 16:04:21 -07:00
fmha_fprop_kernel_1xN.h | Get rid of o_rows_are_valid since we don't have headdim=16 anymore | 2022-10-24 17:29:36 -07:00
fmha_kernel.h | Implement cross attention | 2022-07-03 17:48:12 -07:00
fmha_utils.h | Implement for bf16 | 2022-07-09 23:31:56 -07:00
fmha.h | Split bwd on the seqlen_q dimension | 2022-10-23 11:35:15 -07:00
fp16_switch.h | Fixed switch statement, thanks @yocabon | 2022-10-04 21:31:39 -04:00
philox.cuh | Rework dropout to decouple forward and backward | 2022-10-21 12:04:27 -07:00
static_switch.h | Implement attention kernel that splits the batch into two | 2022-10-13 20:49:02 -07:00
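
The fp16_switch.h and static_switch.h headers in this listing point to the compile-time dispatch idiom commonly used in kernels like these: a runtime flag (for example, whether dropout is enabled, or fp16 vs. bf16 element type) is mapped to a constexpr value so the correct template instantiation of a kernel launcher can be selected. Below is a minimal, self-contained sketch of that pattern only; the BOOL_SWITCH macro name, its exact expansion, and the run_fmha_kernel launcher are illustrative assumptions, not the verbatim contents of these files.

```cpp
// Illustrative sketch of compile-time boolean dispatch (not the verbatim
// contents of static_switch.h). A runtime condition selects a branch in
// which the flag is constexpr and therefore usable as a template argument.
#include <cstdio>

#define BOOL_SWITCH(COND, CONST_NAME, ...)            \
    [&] {                                             \
        if (COND) {                                   \
            constexpr bool CONST_NAME = true;         \
            return __VA_ARGS__();                     \
        } else {                                      \
            constexpr bool CONST_NAME = false;        \
            return __VA_ARGS__();                     \
        }                                             \
    }()

// Hypothetical launcher templated on a flag that must be known at compile time.
template <bool IsDropout>
void run_fmha_kernel() {
    std::printf("dropout compiled in: %d\n", IsDropout);
}

int main() {
    float p_dropout = 0.1f;  // runtime value
    BOOL_SWITCH(p_dropout > 0.0f, IsDropoutConst, [&] {
        // Inside this lambda IsDropoutConst is constexpr, so it can pick
        // the kernel template instantiation at compile time.
        run_fmha_kernel<IsDropoutConst>();
    });
    return 0;
}
```

The same idea presumably extends to element-type dispatch (fp16 vs. bf16), as hinted by fp16_switch.h and the "Implement for bf16" commit on fmha_utils.h.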