flash-attention/flash_attn (latest commit: 2023-08-25 15:05:28 -07:00)
| Name | Last commit message | Last updated |
| --- | --- | --- |
| layers/ | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| losses/ | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| models/ | add llama support to GPTPreTrainedModel.from_pretrained (#479) | 2023-08-24 16:31:16 -07:00 |
| modules/ | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| ops/ | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| utils/ | [GPT] Fix loading weights from HF hub | 2023-08-21 22:56:02 -07:00 |
| __init__.py | Change causal mask to be aligned to bottom-right instead of top-left | 2023-08-24 23:41:07 -07:00 |
| bert_padding.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| flash_attn_interface.py | Change causal mask to be aligned to bottom-right instead of top-left | 2023-08-24 23:41:07 -07:00 |
| flash_attn_triton_og.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| flash_attn_triton.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| flash_blocksparse_attention.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| flash_blocksparse_attn_interface.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| fused_softmax.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| pyproject.toml | Move pyproject.toml to flash-attn and tests dir to avoid PEP 517 | 2023-08-25 15:05:28 -07:00 |
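The `models/` entry records PR #479, which extended `GPTPreTrainedModel.from_pretrained` to remap Llama checkpoints. A minimal sketch of loading a Llama checkpoint through that path; `GPTLMHeadModel` and `llama_config_to_gpt2_config` are names from this repo, but the exact call pattern and the checkpoint name are assumptions based on the repo's tests, not a verified recipe:

```python
import torch
from transformers import LlamaConfig

from flash_attn.models.gpt import GPTLMHeadModel
from flash_attn.models.llama import llama_config_to_gpt2_config

# Example checkpoint name (assumption); any HF Llama checkpoint you can access works.
model_name = "meta-llama/Llama-2-7b-hf"

# Map the HF Llama config onto the GPT2-style config that flash_attn's models consume.
config = llama_config_to_gpt2_config(LlamaConfig.from_pretrained(model_name))

# GPTLMHeadModel inherits from_pretrained from GPTPreTrainedModel; after #479
# it can also remap Llama weights when instantiating the model.
model = GPTLMHeadModel.from_pretrained(model_name, config, device="cuda", dtype=torch.float16)
```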
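Two entries above (`__init__.py` and `flash_attn_interface.py`) record the switch of the causal mask from top-left to bottom-right alignment. A minimal sketch of what that convention means for `flash_attn_func` when the query is shorter than the key; the function and the masking rule follow the flash-attn 2.1 docstrings, while the tensor sizes here are purely illustrative:

```python
import torch
from flash_attn import flash_attn_func

# Illustrative sizes (assumptions): 2-sequence batch, 8 heads, head dim 64.
batch, nheads, headdim = 2, 8, 64
seqlen_q, seqlen_k = 4, 16  # query shorter than key, e.g. appending new tokens to a cache

q = torch.randn(batch, seqlen_q, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn(batch, seqlen_k, nheads, headdim, device="cuda", dtype=torch.float16)
v = torch.randn(batch, seqlen_k, nheads, headdim, device="cuda", dtype=torch.float16)

# Bottom-right alignment: query position i attends to keys j <= i + (seqlen_k - seqlen_q),
# so the last query row sees all 16 keys. Under the old top-left alignment,
# query i attended only to keys j <= i.
out = flash_attn_func(q, k, v, causal=True)  # shape: (batch, seqlen_q, nheads, headdim)
```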