flash-attention/flash_attn (latest commit: 2023-09-19 22:20:22 -07:00)
layers/ | [Gen] Use flash_attn_with_kvcache in generation | 2023-09-07 08:24:43 -07:00
losses/ | [CE] Implement CrossEntropyLoss in Triton | 2023-09-15 20:05:28 -07:00
models/ | Fix Llama GQA/MQA (#546) | 2023-09-19 22:15:59 -07:00
modules/ | [Gen] Rename max_sequence_len->max_seqlen, sequence_len_offset->seqlen_offset | 2023-09-19 22:20:22 -07:00
ops/ | [CE] Implement CrossEntropyLoss in Triton | 2023-09-15 20:05:28 -07:00
utils/ | [Gen] Rename max_sequence_len->max_seqlen, sequence_len_offset->seqlen_offset | 2023-09-19 22:20:22 -07:00
__init__.py | Don't compile for Pytorch 2.1 on CUDA 12.1 due to nvcc segfaults | 2023-09-17 22:15:38 -07:00
bert_padding.py | add unpad_input_for_concatenated_sequences (#499) | 2023-08-29 02:23:56 -07:00
flash_attn_interface.py | Implement rotary embedding in flash_attn_with_kvcache | 2023-09-16 01:20:16 -07:00
flash_attn_triton_og.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00
flash_attn_triton.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00
flash_blocksparse_attention.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00
flash_blocksparse_attn_interface.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00
fused_softmax.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00
pyproject.toml | Move pyproject.toml to flash-attn and tests dir to avoid PEP 517 | 2023-08-25 15:05:28 -07:00