flash-attention/flash_attn
Contents, with the last commit message and date for each entry:
layers/                              [MHA] Implement MQA/GQA (2023-07-23)
losses/                              Tweak CrossEntropyLoss to take process_group in init (2022-12-27)
models/                              [ViT] Run black on vit.py (2023-08-17)
modules/                             [MHA] Run black on mha.py (2023-08-16)
ops/                                 [FusedDense] Allow Row/ColumnParallelLinear to have uneven split (2023-08-16)
utils/                               enable loading hf llama checkpoints for training (#446) (2023-08-15)
__init__.py                          Fix Bwd NaN for varlen when seqlen_q >> seqlen_k and causal (2023-08-16)
bert_padding.py                      remove numpy dependency (2022-10-06)
flash_attn_interface.py              [Docs] Fix docstring about Q nheads being divisible by KV nheads (2023-07-31)
flash_attn_triton_og.py              Implement FlashAttention in Triton (2022-10-30)
flash_attn_triton.py                 [Triton] Fix benchmark_causal, mention Triton version (2023-03-22)
flash_blocksparse_attention.py       Rename src -> flash_attn (2022-06-01)
flash_blocksparse_attn_interface.py  Rename src -> flash_attn (2022-06-01)
fused_softmax.py                     Add Megatron attention implementation for benchmarking (2022-10-23)
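
Two of the entries above concern grouped-query attention: the [MHA] MQA/GQA commit under layers/ and the flash_attn_interface.py docstring fix about the number of Q heads being divisible by the number of KV heads. A minimal sketch of such a call through the public flash_attn_func API, assuming a CUDA device, fp16 inputs, and an illustrative 8-query-head / 2-KV-head split:

    # Sketch only: a GQA call via flash_attn_func; shapes and head counts are illustrative.
    import torch
    from flash_attn import flash_attn_func

    batch, seqlen, headdim = 2, 1024, 64
    nheads_q, nheads_kv = 8, 2  # nheads_q must be divisible by nheads_kv

    q = torch.randn(batch, seqlen, nheads_q, headdim, device="cuda", dtype=torch.float16)
    k = torch.randn(batch, seqlen, nheads_kv, headdim, device="cuda", dtype=torch.float16)
    v = torch.randn(batch, seqlen, nheads_kv, headdim, device="cuda", dtype=torch.float16)

    # Each group of nheads_q // nheads_kv = 4 query heads shares one KV head.
    out = flash_attn_func(q, k, v, causal=True)  # (batch, seqlen, nheads_q, headdim)

Setting nheads_kv = 1 gives the MQA special case.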
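
Similarly, bert_padding.py supplies the unpad_input/pad_input helpers that convert between a padded (batch, seqlen, dim) batch and the packed token layout the varlen kernels consume. A sketch of the round trip, assuming the four-value unpad_input return of this vintage of the library:

    # Sketch only: pack padded tokens, then restore padding; the mask layout is illustrative.
    import torch
    from flash_attn.bert_padding import unpad_input, pad_input

    batch, seqlen, dim = 2, 8, 16
    hidden = torch.randn(batch, seqlen, dim, device="cuda", dtype=torch.float16)
    mask = torch.tensor([[1] * 8, [1] * 5 + [0] * 3], device="cuda")  # 1 = real token

    # Pack valid tokens into one (total_tokens, dim) tensor plus varlen metadata.
    unpadded, indices, cu_seqlens, max_seqlen = unpad_input(hidden, mask)
    # ... run a varlen attention kernel on `unpadded` with cu_seqlens / max_seqlen ...
    repadded = pad_input(unpadded, indices, batch, seqlen)  # back to (batch, seqlen, dim)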