flash-attention/flash_attn

Latest commit: 1aa6d7d9b6 by Tri Dao, 2022-10-21 12:04:27 -07:00
"Rework dropout to decouple forward and backward": the forward and backward passes no longer have to share the same block size, number of threads, etc. (a sketch of the idea follows the file listing).
__init__.py                           Add missing __init__.py                            2022-07-03 02:04:55 -04:00
bert_padding.py                       remove numpy dependency                            2022-10-06 19:17:15 +02:00
flash_attention.py                    Relax assert to allow both bf16 and fp16           2022-09-11 12:09:43 -07:00
flash_attn_interface.py               Rework dropout to decouple forward and backward    2022-10-21 12:04:27 -07:00
flash_blocksparse_attention.py        Rename src -> flash_attn                           2022-06-01 18:50:26 -07:00
flash_blocksparse_attn_interface.py   Rename src -> flash_attn                           2022-06-01 18:50:26 -07:00
rotary.py                             Rename src -> flash_attn                           2022-06-01 18:50:26 -07:00
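
The decoupling described in the latest commit is possible because dropout decisions can be derived from a counter-based RNG rather than from the kernel launch geometry. Below is a minimal Python sketch of that idea, assuming a generator keyed by (seed, element index); the function name and hashing scheme are illustrative only, not flash-attention's actual CUDA implementation (which uses a Philox-style generator on the GPU).

```python
import hashlib

def dropout_keep(seed: int, index: int, p_drop: float) -> bool:
    """Decide whether element `index` survives dropout.

    The decision depends only on (seed, index), never on block size
    or thread layout, so a forward and a backward kernel launched
    with different configurations regenerate identical masks.
    """
    digest = hashlib.blake2b(f"{seed}:{index}".encode(), digest_size=8).digest()
    u = int.from_bytes(digest, "little") / 2**64  # uniform draw in [0, 1)
    return u >= p_drop

# Forward pass: draw a seed once and save only that seed.
seed = 1234
mask_fwd = [dropout_keep(seed, i, 0.1) for i in range(1024)]

# Backward pass: any tiling can rebuild the exact same mask from the seed.
mask_bwd = [dropout_keep(seed, i, 0.1) for i in range(1024)]
assert mask_fwd == mask_bwd
```

Because the mask is a pure function of the saved seed and each element's global index, the backward kernel can pick whatever block size and thread count suit its own occupancy, which is the freedom the commit message refers to.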