flash-attention/tests/models
File                              Last commit                                                          Date
test_bert.py                      [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP    2023-01-17 18:12:27 -08:00
test_gpt_generation_cg.py         [Gen] Fix FT kernel smem size, CG when batch size changed            2023-04-20 17:03:13 -07:00
test_gpt_generation_parallel.py   Implement GPT-J                                                      2023-03-22 16:16:58 -07:00
test_gpt_generation.py            [Gen] Add rotary base as an argument to FT attention kernel          2023-05-30 13:38:34 -07:00
test_gpt_neox.py                  Implement LLaMa                                                      2023-04-18 21:51:35 -07:00
test_gpt_parallel.py              [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP    2023-01-17 18:12:27 -08:00
test_gpt.py                       Implement GPT-J                                                      2023-03-22 16:16:58 -07:00
test_gptj.py                      Implement LLaMa                                                      2023-04-18 21:51:35 -07:00
test_llama.py                     [LLaMa] Fix last norm layer to use RMSNorm instead of LayerNorm      2023-05-04 23:39:43 -07:00
test_opt.py                       Implement GPT-J                                                      2023-03-22 16:16:58 -07:00
test_vit.py                       [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP    2023-01-17 18:12:27 -08:00
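
These modules appear to be standard pytest test files. A minimal sketch of invoking one of them programmatically, assuming pytest is installed and this is run from the repository root; the choice of test_llama.py and the -q flag are illustrative, not prescribed by the repo:

    import sys
    import pytest  # assumed available; the modules above follow pytest's test_*.py convention

    if __name__ == "__main__":
        # Run a single model test module; pytest.main returns a shell-style
        # exit code (0 means all selected tests passed).
        sys.exit(pytest.main(["-q", "tests/models/test_llama.py"]))

The same pattern selects any other module in the table, or the whole directory by passing "tests/models" instead of a single file.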