flash-attention/tests/models
File                             Last commit                                                          Last updated
test_baichuan.py                 [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead   2023-09-18 15:29:06 -07:00
test_bert.py                     Add BigCode converters (#532)                                        2023-09-10 17:24:50 -07:00
test_bigcode.py                  [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead   2023-09-18 15:29:06 -07:00
test_falcon.py                   [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead   2023-09-18 15:29:06 -07:00
test_gpt_generation_parallel.py  [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead   2023-09-18 15:29:06 -07:00
test_gpt_neox.py                 Add tests for Pythia, GPT-JT, and RedPajama models                  2023-09-13 01:10:39 -07:00
test_gpt_parallel.py             Run isort and black on test files                                    2023-08-18 20:59:35 -07:00
test_gpt.py                      [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead   2023-09-18 15:29:06 -07:00
test_gptj.py                     [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead   2023-09-18 15:29:06 -07:00
test_llama.py                    [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead   2023-09-18 15:29:06 -07:00
test_opt.py                      [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead   2023-09-18 15:29:06 -07:00
test_vit.py                      Run isort and black on test files                                    2023-08-18 20:59:35 -07:00
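Most of the generation-model tests above track the change described in their latest commit: decoding goes through flash_attn_with_kvcache rather than the ft_attention kernel. As a rough illustration only (not copied from these test files), the sketch below shows a single decoding step with flash_attn_with_kvcache; the tensor shapes, dimensions, and cache_seqlens handling are assumptions based on the public flash_attn API.

    # Minimal sketch (assumption, not taken from the tests): one decoding step
    # through flash_attn_with_kvcache, appending the new key/value to the cache.
    import torch
    from flash_attn import flash_attn_with_kvcache

    batch, nheads, headdim = 2, 16, 64
    max_seqlen = 256   # capacity of the pre-allocated KV cache (illustrative)
    cur_len = 10       # tokens already stored in the cache for every sequence

    device, dtype = "cuda", torch.float16
    q = torch.randn(batch, 1, nheads, headdim, device=device, dtype=dtype)  # new query token
    k = torch.randn(batch, 1, nheads, headdim, device=device, dtype=dtype)  # new key
    v = torch.randn(batch, 1, nheads, headdim, device=device, dtype=dtype)  # new value

    k_cache = torch.zeros(batch, max_seqlen, nheads, headdim, device=device, dtype=dtype)
    v_cache = torch.zeros_like(k_cache)
    cache_seqlens = torch.full((batch,), cur_len, dtype=torch.int32, device=device)

    # k/v are written into k_cache/v_cache at position cache_seqlens, and attention
    # is computed over the cached prefix plus the new token.
    out = flash_attn_with_kvcache(
        q, k_cache, v_cache, k=k, v=v, cache_seqlens=cache_seqlens, causal=True
    )
    print(out.shape)  # (batch, 1, nheads, headdim)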