flash-attention/flash_attn (last commit: 2023-01-06 17:34:22 -08:00)
| Name                                | Last commit message                                            | Date                       |
|-------------------------------------|----------------------------------------------------------------|----------------------------|
| layers/                             | [Docs] Mention that XPos's scale_base is recommended to be 512 | 2022-12-29 20:25:02 -08:00 |
| losses/                             | Tweak CrossEntropyLoss to take process_group in init           | 2022-12-27 10:47:43 -08:00 |
| models/                             | [Bert] Fix embedding layer norm before embedding dropout       | 2023-01-01 10:38:05 -08:00 |
| modules/                            | [Gen] Add option to run generation with FT attention kernel    | 2023-01-03 22:10:31 -08:00 |
| ops/                                | [LayerNorm] Implement RMS Norm                                 | 2023-01-06 17:34:22 -08:00 |
| utils/                              | [Gen] Add option to run generation with FT attention kernel    | 2023-01-03 22:10:31 -08:00 |
| __init__.py                         | Add missing __init__.py                                        | 2022-07-03 02:04:55 -04:00 |
| bert_padding.py                     | remove numpy dependency                                        | 2022-10-06 19:17:15 +02:00 |
| flash_attention.py                  | Implement BERT                                                 | 2022-12-18 21:47:27 -08:00 |
| flash_attn_interface.py             | Fix the case when dout is not contiguous                       | 2022-12-13 13:58:17 -08:00 |
| flash_attn_triton_og.py             | Implement FlashAttention in Triton                             | 2022-10-30 18:09:11 -07:00 |
| flash_attn_triton.py                | [Triton] Avoid einops repeat by using Tensor.expand            | 2022-12-14 14:48:41 -08:00 |
| flash_blocksparse_attention.py      | Rename src -> flash_attn                                       | 2022-06-01 18:50:26 -07:00 |
| flash_blocksparse_attn_interface.py | Rename src -> flash_attn                                       | 2022-06-01 18:50:26 -07:00 |
| fused_softmax.py                    | Add Megatron attention implementation for benchmarking        | 2022-10-23 23:04:16 -07:00 |
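The listing above is only metadata, but for orientation, here is a minimal sketch of how the unpadded attention entry point in flash_attn_interface.py was typically called in this era of the codebase. The function name and argument order follow the v0.2-series interface (`flash_attn_unpadded_qkvpacked_func`); treat the exact signature as an assumption, since it shifted between commits.

```python
import torch
from flash_attn.flash_attn_interface import flash_attn_unpadded_qkvpacked_func

batch, seqlen, nheads, headdim = 2, 1024, 12, 64

# Packed QKV for all tokens in the batch, with padding removed:
# shape (total_tokens, 3, nheads, headdim), fp16/bf16, on CUDA.
qkv = torch.randn(batch * seqlen, 3, nheads, headdim,
                  device="cuda", dtype=torch.float16, requires_grad=True)

# Cumulative sequence lengths, int32, shape (batch + 1,).
# Here both sequences happen to have the full length `seqlen`.
cu_seqlens = torch.arange(0, (batch + 1) * seqlen, seqlen,
                          dtype=torch.int32, device="cuda")

out = flash_attn_unpadded_qkvpacked_func(
    qkv, cu_seqlens, seqlen, dropout_p=0.0, causal=True
)
# out has shape (total_tokens, nheads, headdim).
```

In practice, the `unpad_input` / `pad_input` helpers in bert_padding.py are what convert between a padded `(batch, seqlen, ...)` batch with an attention mask and the flattened `(total_tokens, ...)` layout plus `cu_seqlens` that this interface consumes.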