flash-attention/flash_attn  (last commit: 2023-12-23 17:57:36 -08:00)
layers/                               Fix typo in RotaryEmbedding forward output type (#666)             2023-11-09 11:43:02 -08:00
losses/                               [CrossEntropy] Implement logit_scale option                        2023-12-16 18:39:37 -08:00
models/                               Implement norm head for Baichuan2                                  2023-12-22 16:55:40 -08:00
modules/                              Add Alibi to MHA, test with Baichuan-13B                           2023-12-21 22:49:55 -08:00
ops/                                  [LayerNorm] Implement dropout in fused residual + LN/RMSNorm       2023-12-19 16:26:07 -08:00
utils/                                [Gen] Remove minor dead code                                       2023-12-19 22:57:39 -08:00
__init__.py                           [CI] Don't compile for python 3.7 pytorch 2.2                      2023-12-22 10:10:02 -08:00
bert_padding.py                       add unpad_input_for_concatenated_sequences (#499)                  2023-08-29 02:23:56 -07:00
flash_attn_interface.py               Implement deterministic backward (thanks to Meituan)               2023-12-23 17:57:36 -08:00
flash_attn_triton_og.py               Run isort and black on python files                                2023-08-18 14:22:11 -07:00
flash_attn_triton.py                  Run isort and black on python files                                2023-08-18 14:22:11 -07:00
flash_blocksparse_attention.py        Run isort and black on python files                                2023-08-18 14:22:11 -07:00
flash_blocksparse_attn_interface.py   Run isort and black on python files                                2023-08-18 14:22:11 -07:00
fused_softmax.py                      Run isort and black on python files                                2023-08-18 14:22:11 -07:00
pyproject.toml                        Move pyproject.toml to flash-attn and tests dir to avoid PEP 517   2023-08-25 15:05:28 -07:00