flash-attention/flash_attn
Ivan Komarov f692b98d80
Fix spurious re-compilations of rotary_kernel (#911)
Triton specializes all integer kernel parameters by default, so the two
parameters removed in this commit could trigger re-compilation of the
kernel even though they were completely unused.
2024-04-05 13:40:41 -07:00
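Why unused integer arguments cause this: Triton's JIT folds the specialization of every integer argument (e.g. whether it equals 1 or is divisible by 16) into its compilation cache key, so changing such a value across calls can compile a new binary even if the kernel never reads it. The sketch below is not the actual rotary_kernel; `copy_kernel`, `unused_int`, and `BLOCK` are hypothetical names used only to illustrate the behavior.

```python
# Minimal sketch of spurious re-compilation from an unused integer argument.
import torch
import triton
import triton.language as tl


@triton.jit
def copy_kernel(X, Y, n_elements, unused_int,  # `unused_int` is never read below
                BLOCK: tl.constexpr):
    pid = tl.program_id(0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    tl.store(Y + offs, tl.load(X + offs, mask=mask), mask=mask)


x = torch.randn(1024, device="cuda")
y = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 128),)

# 16 vs. 17 changes the specialization of `unused_int` (divisible by 16 vs. not),
# so the two calls can compile separate binaries even though the kernel body
# ignores the value entirely. Dropping the argument avoids the extra compiles.
copy_kernel[grid](x, y, x.numel(), 16, BLOCK=128)
copy_kernel[grid](x, y, x.numel(), 17, BLOCK=128)
```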
layers Fix typo in RotaryEmbedding forward output type (#666) 2023-11-09 11:43:02 -08:00
losses return z_loss (#768) 2024-01-21 15:23:41 -08:00
models Add window_size option to MHA and GPT 2024-01-31 02:42:23 -08:00
modules fix: cast the alibi slopes to torch.float32 (#846) 2024-03-15 00:49:40 -07:00
ops Fix spurious re-compilations of rotary_kernel (#911) 2024-04-05 13:40:41 -07:00
utils [Gen] Remove minor dead code 2023-12-19 22:57:39 -08:00
__init__.py Bump to v2.5.6 2024-03-01 22:09:56 -08:00
bert_padding.py Updated missing docstrings for args and returns in bert_padding.py (#795) 2024-01-27 09:16:25 -08:00
flash_attn_interface.py Enable paged attention in varlen forward (#831) 2024-03-15 00:48:19 -07:00
flash_attn_triton_og.py Run isort and black on python files 2023-08-18 14:22:11 -07:00
flash_attn_triton.py Run isort and black on python files 2023-08-18 14:22:11 -07:00
flash_blocksparse_attention.py Run isort and black on python files 2023-08-18 14:22:11 -07:00
flash_blocksparse_attn_interface.py Run isort and black on python files 2023-08-18 14:22:11 -07:00
fused_softmax.py Run isort and black on python files 2023-08-18 14:22:11 -07:00
pyproject.toml Move pyproject.toml to flash-attn and tests dir to avoid PEP 517 2023-08-25 15:05:28 -07:00
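For orientation, a minimal usage sketch of the public attention entry point exposed by flash_attn_interface.py; the `window_size` and `alibi_slopes` parameters correspond to the commits listed above, but the exact signature should be checked against the installed version.

```python
# Sketch of calling flash_attn_func with a sliding window and ALiBi slopes.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Per-head ALiBi slopes must be float32 (see commit #846 above).
alibi_slopes = torch.rand(nheads, device="cuda", dtype=torch.float32)

out = flash_attn_func(
    q, k, v,
    causal=True,
    window_size=(256, 0),       # (left, right) local window; (-1, -1) = unrestricted
    alibi_slopes=alibi_slopes,
)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```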