flash-attention/flash_attn
Antoine Adam 4e38df059e
remove numpy dependency
According to the `setup.py` file, the only dependencies are torch and einops. However, `bert_padding.py` requires `numpy` solely to multiply the elements of a `torch.Size` object. This change allows FlashAttention to be used without numpy.
2022-10-06 19:17:15 +02:00
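The commit description refers to computing the product of a `torch.Size`'s elements without numpy. A minimal sketch of such a replacement (variable names are illustrative, not taken from `bert_padding.py`):

```python
import math
import torch

x = torch.randn(4, 8, 16)
other_shape = x.shape[1:]          # torch.Size([8, 16])

# numpy-based version (the dependency being removed):
#   second_dim = np.prod(other_shape)

# numpy-free alternatives: torch.Size provides numel(), and the Python
# standard library's math.prod works on any iterable of ints.
second_dim = other_shape.numel()
assert second_dim == math.prod(other_shape) == 128
```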
__init__.py Add missing __init__.py 2022-07-03 02:04:55 -04:00
bert_padding.py remove numpy dependency 2022-10-06 19:17:15 +02:00
flash_attention.py Relax assert to allow both bf16 and fp16 2022-09-11 12:09:43 -07:00
flash_attn_interface.py Do P * dP (pointwise) in the bwd in fp32 instead of fp16 2022-07-03 17:52:05 -07:00
flash_blocksparse_attention.py Rename src -> flash_attn 2022-06-01 18:50:26 -07:00
flash_blocksparse_attn_interface.py Rename src -> flash_attn 2022-06-01 18:50:26 -07:00
rotary.py Rename src -> flash_attn 2022-06-01 18:50:26 -07:00