Contents of flash-attention/flash_attn (latest commit 2022-11-23 12:48:56 -08:00):
Name                                 Last commit                                                      Date
layers/                              Add PatchEmbed                                                   2022-11-17 16:56:06 -08:00
losses/                              Bump version to 0.2.1                                            2022-11-20 22:35:59 -08:00
models/                              [ViT] Use dropout_add_ln for the 1st layer norm                  2022-11-23 12:48:56 -08:00
modules/                             [ViT] Use dropout_add_ln for the 1st layer norm                  2022-11-23 12:48:56 -08:00
ops/                                 Add __init__.py files to subdirectories for installation         2022-11-17 16:55:44 -08:00
utils/                               Add __init__.py files to subdirectories for installation         2022-11-17 16:55:44 -08:00
__init__.py                          Add missing __init__.py                                          2022-07-03 02:04:55 -04:00
bert_padding.py                      remove numpy dependency                                          2022-10-06 19:17:15 +02:00
flash_attention.py                   Remove RotaryEmbedding from FlashAttention module                2022-11-10 11:54:36 -08:00
flash_attn_interface.py              Parallelize CUDA bwd along seqlen_k instead of seqlen_q          2022-11-05 16:26:17 -07:00
flash_attn_triton_og.py              Implement FlashAttention in Triton                               2022-10-30 18:09:11 -07:00
flash_attn_triton.py                 [Triton] Fix variable name from qkv to kv (h/t FrankZijlstra)    2022-11-22 02:07:32 -08:00
flash_blocksparse_attention.py       Rename src -> flash_attn                                         2022-06-01 18:50:26 -07:00
flash_blocksparse_attn_interface.py  Rename src -> flash_attn                                         2022-06-01 18:50:26 -07:00
fused_softmax.py                     Add Megatron attention implementation for benchmarking           2022-10-23 23:04:16 -07:00
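As a rough orientation to the package above, a minimal usage sketch is given below; it assumes the FlashAttention module in flash_attention.py accepts a packed qkv tensor of shape (batch, seqlen, 3, nheads, headdim) in fp16/bf16 on a CUDA device and returns (output, attention_weights). The exact constructor and forward signatures should be checked against flash_attention.py in this directory.

# Hypothetical sketch, not taken from this listing: calling the FlashAttention
# module with a packed qkv tensor. Shapes, dtypes, and keyword arguments are
# assumptions about the API at this point in the repo.
import torch
from flash_attn.flash_attention import FlashAttention

batch, seqlen, nheads, headdim = 2, 1024, 12, 64
# FlashAttention expects half-precision inputs on a CUDA device.
qkv = torch.randn(batch, seqlen, 3, nheads, headdim,
                  dtype=torch.float16, device="cuda")

attn = FlashAttention(attention_dropout=0.0)
out, _ = attn(qkv, causal=True)  # out: (batch, seqlen, nheads, headdim)
print(out.shape)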