flash-attention/flash_attn
Latest commit: 2023-11-27 16:21:28 -08:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| layers | Fix typo in RotaryEmbedding forward output type (#666) | 2023-11-09 11:43:02 -08:00 |
| losses | [CE] Implement CrossEntropyLoss in Triton | 2023-09-15 20:05:28 -07:00 |
| models | Fix E1136 (#563) | 2023-09-21 11:48:23 -07:00 |
| modules | [Llama] Fix some tests, add tests for Llama 2 and CodeLlama | 2023-09-20 23:36:46 -07:00 |
| ops | [CrossEntropy] Simplify the case of large vocab with Tensor Parallel | 2023-11-19 23:19:36 -08:00 |
| utils | [LayerNorm] Implement residual + LayerNorm/RMSNorm in Triton | 2023-11-13 02:04:49 -08:00 |
| __init__.py | Bump to v2.3.5 | 2023-11-26 19:08:28 -08:00 |
| bert_padding.py | add unpad_input_for_concatenated_sequences (#499) | 2023-08-29 02:23:56 -07:00 |
| flash_attn_interface.py | [CI] Only compile for CUDA 11.8 & 12.2, MAX_JOBS=2, add torch-nightly | 2023-11-27 16:21:28 -08:00 |
| flash_attn_triton_og.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| flash_attn_triton.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| flash_blocksparse_attention.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| flash_blocksparse_attn_interface.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| fused_softmax.py | Run isort and black on python files | 2023-08-18 14:22:11 -07:00 |
| pyproject.toml | Move pyproject.toml to flash-attn and tests dir to avoid PEP 517 | 2023-08-25 15:05:28 -07:00 |
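The core attention functions live in flash_attn_interface.py and are re-exported by __init__.py (v2.3.5 in this listing). As a rough illustration of how the package is typically consumed, here is a minimal sketch, not taken from this repository; it assumes flash-attn 2.x is installed and a CUDA GPU is available, and the tensor shapes are purely illustrative:

```python
# Minimal usage sketch for the public FlashAttention API (assumption: flash-attn 2.x
# installed with CUDA support; fp16 inputs on a GPU are required by the kernels).
import torch
from flash_attn import flash_attn_func  # re-exported from flash_attn_interface.py

batch, seqlen, nheads, headdim = 2, 1024, 16, 64

# Inputs are laid out as (batch, seqlen, nheads, headdim) in fp16 or bf16 on the GPU.
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Causal self-attention; when softmax_scale is left unset it defaults to 1/sqrt(headdim).
out = flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```

For variable-length (unpadded) batches, bert_padding.py provides the padding/unpadding helpers used with the varlen entry points in flash_attn_interface.py.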