flash-attention/flash_attn (last commit 2024-11-15 16:23:40 -08:00)
| Name | Last commit | Last commit date |
| --- | --- | --- |
| `layers` | [Rotary] Support qkv block layout from GQA | 2024-09-11 10:39:58 -07:00 |
| `losses` | [CrossEntropy] Support precomputed LSE | 2024-09-08 09:24:43 -07:00 |
| `models` | minify torch.torch.int32 to torch.int32 (#1237) | 2024-09-18 00:32:59 -07:00 |
| `modules` | Fix: check the type of max_seqlen_k instead of checking max_seqlen twice (#1127) | 2024-08-05 08:59:23 -07:00 |
| `ops` | Fix swiglu backwards return type (#1337) | 2024-11-15 16:23:40 -08:00 |
| `utils` | Update citation | 2024-05-26 16:09:03 -07:00 |
| `__init__.py` | [CI] Pytorch 2.5.1 does not support python 3.8 | 2024-11-12 20:02:13 -08:00 |
| `bert_padding.py` | minify torch.torch.int32 to torch.int32 (#1237) | 2024-09-18 00:32:59 -07:00 |
| `flash_attn_interface.py` | Add custom ops for compatibility with PT Compile (#1139) | 2024-09-17 19:49:26 -07:00 |
| `flash_attn_triton_og.py` | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| `flash_attn_triton.py` | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| `flash_blocksparse_attention.py` | minor changes to unpad_input test util func | 2024-09-16 14:24:11 -07:00 |
| `flash_blocksparse_attn_interface.py` | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| `fused_softmax.py` | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| `pyproject.toml` | Move pyproject.toml to flash-attn and tests dir to avoid PEP 517 | 2023-08-25 15:05:28 -07:00 |
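For orientation, `flash_attn_interface.py` holds the public attention entry points (e.g. `flash_attn_func`), which the package re-exports from `__init__.py`. The sketch below shows a typical call, assuming a CUDA device and fp16 inputs; shapes and defaults follow the upstream documentation, but treat it as an illustrative example rather than the API of any specific release.

```python
# Minimal sketch: calling the public attention function exposed via
# flash_attn_interface.py. Assumes a CUDA GPU and fp16 (or bf16) tensors.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 16, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
v = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)

# Causal self-attention; softmax_scale defaults to 1/sqrt(headdim).
out = flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```

For grouped-query attention (the GQA support referenced in the `layers` commit above), `k` and `v` may carry fewer heads than `q`, as long as the query head count is divisible by the key/value head count.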