flash-attention/tests/models
test_bert.py            Tweak CrossEntropyLoss to take process_group in init                  2022-12-27 10:47:43 -08:00
test_gpt_generation.py  [Gen] Pass qkv_stride to ft_attention kernel for batched generation   2023-01-15 15:20:01 -08:00
test_gpt_parallel.py    [TP] Implement TensorParallel without sequence parallel               2023-01-07 13:45:22 -08:00
test_gpt.py             Implement generation for GPT                                           2022-12-27 21:01:50 -08:00