flash-attention/flash_attn (latest commit: 2024-09-16 14:54:39 -07:00)
layers/                              [Rotary] Support qkv block layout from GQA                                      2024-09-11 10:39:58 -07:00
losses/                              [CrossEntropy] Support precomputed LSE                                          2024-09-08 09:24:43 -07:00
models/                              minor changes to unpad_input test util func                                     2024-09-16 14:24:11 -07:00
modules/                             Fix: check the type of max_seqlen_k instead of checking max_seqlen twice (#1127) 2024-08-05 08:59:23 -07:00
ops/                                 [Rotary] Support qkv block layout from GQA                                      2024-09-11 10:39:58 -07:00
utils/                               Update citation                                                                 2024-05-26 16:09:03 -07:00
__init__.py                          Bump to v2.6.3                                                                  2024-07-25 01:31:28 -07:00
bert_padding.py                      small fixes                                                                     2024-09-16 14:54:39 -07:00
flash_attn_interface.py              remove lambda (#1056)                                                           2024-07-21 23:24:38 -07:00
flash_attn_triton_og.py              Run isort and black on python files                                             2023-08-18 14:22:11 -07:00
flash_attn_triton.py                 Run isort and black on python files                                             2023-08-18 14:22:11 -07:00
flash_blocksparse_attention.py       minor changes to unpad_input test util func                                     2024-09-16 14:24:11 -07:00
flash_blocksparse_attn_interface.py  Run isort and black on python files                                             2023-08-18 14:22:11 -07:00
fused_softmax.py                     Run isort and black on python files                                             2023-08-18 14:22:11 -07:00
pyproject.toml                       Move pyproject.toml to flash-attn and tests dir to avoid PEP 517                2023-08-25 15:05:28 -07:00