flash-attention/flash_attn
Name                                 Last commit date            Last commit message
layers/                              2023-11-09 11:43:02 -08:00  Fix typo in RotaryEmbedding forward output type (#666)
losses/                              2024-01-21 15:23:41 -08:00  return z_loss (#768)
models/                              2024-01-05 00:31:17 -08:00  [LayerNorm] Switch from CUDA to Triton implementation
modules/                             2024-01-05 00:31:17 -08:00  [LayerNorm] Switch from CUDA to Triton implementation
ops/                                 2024-01-22 22:40:06 -08:00  [LayerNorm] Don't exit early in the backward pass (fix #781)
utils/                               2023-12-19 22:57:39 -08:00  [Gen] Remove minor dead code
__init__.py                          2024-01-21 17:23:39 -08:00  [CI] Fix CUDA 12.2.2 compilation
bert_padding.py                      2023-08-29 02:23:56 -07:00  add unpad_input_for_concatenated_sequences (#499)
flash_attn_interface.py              2024-01-13 00:25:04 -08:00  Simplify writing softmax to gmem
flash_attn_triton_og.py              2023-08-18 14:22:11 -07:00  Run isort and black on python files
flash_attn_triton.py                 2023-08-18 14:22:11 -07:00  Run isort and black on python files
flash_blocksparse_attention.py       2023-08-18 14:22:11 -07:00  Run isort and black on python files
flash_blocksparse_attn_interface.py  2023-08-18 14:22:11 -07:00  Run isort and black on python files
fused_softmax.py                     2023-08-18 14:22:11 -07:00  Run isort and black on python files
pyproject.toml                       2023-08-25 15:05:28 -07:00  Move pyproject.toml to flash-attn and tests dir to avoid PEP 517
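The Python-facing attention functions for this package live in flash_attn_interface.py, one of the files listed above. A minimal usage sketch follows, assuming a CUDA build of flash-attn is installed and a GPU is available; the (batch, seqlen, nheads, headdim) layout, fp16/bf16 dtype requirement, and the flash_attn_func entry point reflect that interface, while the specific sizes are illustrative only.

```python
# Minimal sketch: calling the FlashAttention forward pass through the
# Python interface in flash_attn/flash_attn_interface.py.
# Assumes flash-attn is installed with CUDA support and a GPU is present;
# inputs must be fp16 or bf16 with shape (batch, seqlen, nheads, headdim).
import torch
from flash_attn.flash_attn_interface import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 16, 64  # illustrative sizes
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Causal self-attention; the output has the same shape and dtype as q.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # torch.Size([2, 1024, 16, 64])
```

For variable-length batches, the same module also provides varlen variants that work together with the padding/unpadding helpers in bert_padding.py listed above.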