flash-attention/tests
Name                 Last commit message                                                 Date
layers               [Rotary] Implement GPT-J style (interleaved) rotary                 2023-03-14 14:35:53 -07:00
losses               Tweak CrossEntropyLoss to take process_group in init                2022-12-27 10:47:43 -08:00
models               [Gen] Add rotary base as an argument to FT attention kernel         2023-05-30 13:38:34 -07:00
modules              [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP   2023-01-17 18:12:27 -08:00
ops                  [LayerNorm] Implement LN with parallel residual, support dim 8k     2023-03-31 14:23:45 -07:00
test_flash_attn.py   Skip flash_attn_split test                                          2022-11-13 12:27:48 -08:00
test_rotary.py       Add MLP, MHA, Block, Embedding modules                              2022-11-13 22:06:44 -08:00