flash-attention/flash_attn (latest commit: 2023-09-10 17:24:50 -07:00)
Directories:
  layers/                               [Gen] Use flash_attn_with_kvcache in generation                     2023-09-07 08:24:43 -07:00
  losses/                               Run isort and black on python files                                 2023-08-18 14:22:11 -07:00
  models/                               Add BigCode converters (#532)                                       2023-09-10 17:24:50 -07:00
  modules/                              [Gen] Fix calling update_graph_cache in tests                       2023-09-10 17:22:37 -07:00
  ops/                                  [Gen] Fix calling update_graph_cache in tests                       2023-09-10 17:22:37 -07:00
  utils/                                [Gen] Use flash_attn_with_kvcache in generation                     2023-09-07 08:24:43 -07:00

Files:
  __init__.py                           Bump to v2.2.1                                                      2023-09-06 02:19:55 -07:00
  bert_padding.py                       add unpad_input_for_concatenated_sequences (#499)                   2023-08-29 02:23:56 -07:00
  flash_attn_interface.py               Support cache_seqlens being integer                                 2023-09-05 11:27:48 -07:00
  flash_attn_triton_og.py               Run isort and black on python files                                 2023-08-18 14:22:11 -07:00
  flash_attn_triton.py                  Run isort and black on python files                                 2023-08-18 14:22:11 -07:00
  flash_blocksparse_attention.py        Run isort and black on python files                                 2023-08-18 14:22:11 -07:00
  flash_blocksparse_attn_interface.py   Run isort and black on python files                                 2023-08-18 14:22:11 -07:00
  fused_softmax.py                      Run isort and black on python files                                 2023-08-18 14:22:11 -07:00
  pyproject.toml                        Move pyproject.toml to flash-attn and tests dir to avoid PEP 517    2023-08-25 15:05:28 -07:00

Brief usage sketches for flash_attn_with_kvcache and the bert_padding helpers referenced in the commit messages above follow.
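The [Gen] commits on layers/ and utils/ move generation onto flash_attn_with_kvcache from flash_attn_interface.py. Below is a minimal single-step decoding sketch; the shapes, cache length, and dummy tensors are illustrative assumptions, not taken from this repo:

    import torch
    from flash_attn.flash_attn_interface import flash_attn_with_kvcache

    batch, nheads, headdim, max_seqlen = 2, 8, 64, 256

    # Pre-allocated KV cache; only the first cache_seqlens positions of each
    # batch entry are treated as valid.
    k_cache = torch.zeros(batch, max_seqlen, nheads, headdim,
                          dtype=torch.float16, device="cuda")
    v_cache = torch.zeros_like(k_cache)
    cache_seqlens = torch.full((batch,), 10, dtype=torch.int32, device="cuda")

    # One decoding step: the kernel writes the new k/v into the cache at
    # position cache_seqlens (in place) and attends over prefix + new token.
    q = torch.randn(batch, 1, nheads, headdim, dtype=torch.float16, device="cuda")
    k_new, v_new = torch.randn_like(q), torch.randn_like(q)

    out = flash_attn_with_kvcache(
        q, k_cache, v_cache, k=k_new, v=v_new,
        cache_seqlens=cache_seqlens, causal=True,
    )  # (batch, 1, nheads, headdim)

Fusing the cache update into the attention kernel avoids the separate torch.cat on k/v that a naive KV-cache loop performs at every step.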
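Per the flash_attn_interface.py commit "Support cache_seqlens being integer", cache_seqlens may also be a plain Python int when every batch entry has the same number of cached tokens. Continuing the sketch above with the same tensors:

    # Scalar form: all batch entries share a cached length of 10.
    out = flash_attn_with_kvcache(
        q, k_cache, v_cache, k=k_new, v=v_new,
        cache_seqlens=10, causal=True,
    )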
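bert_padding.py holds the pad/unpad helpers that pack variable-length batches so the varlen kernels never touch padding tokens; PR #499 adds unpad_input_for_concatenated_sequences for packed multi-sequence inputs. The sketch below shows the standard unpad_input/pad_input round trip (not the new helper) around flash_attn_varlen_func; the shapes, mask, and the q = k = v shortcut are assumptions for illustration:

    import torch
    from flash_attn import flash_attn_varlen_func
    from flash_attn.bert_padding import pad_input, unpad_input

    batch, seqlen, nheads, headdim = 2, 16, 4, 64
    hidden = nheads * headdim

    x = torch.randn(batch, seqlen, hidden, dtype=torch.float16, device="cuda")
    mask = torch.ones(batch, seqlen, dtype=torch.bool, device="cuda")
    mask[1, 8:] = False  # second sequence has only 8 real tokens

    # Drop padding tokens; keep the metadata needed to undo the packing.
    x_unpad, indices, cu_seqlens, max_seqlen = unpad_input(x, mask)

    qkv = x_unpad.view(-1, nheads, headdim)  # reuse one tensor as q, k, v for brevity
    out_unpad = flash_attn_varlen_func(
        qkv, qkv, qkv,
        cu_seqlens_q=cu_seqlens, cu_seqlens_k=cu_seqlens,
        max_seqlen_q=max_seqlen, max_seqlen_k=max_seqlen,
        causal=True,
    )

    # Scatter results back into the padded (batch, seqlen, hidden) layout.
    out = pad_input(out_unpad.view(-1, hidden), indices, batch, seqlen)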