flash-attention/tests (latest commit: 2022-12-27 21:01:50 -08:00)
Name                 Last commit message                                               Last commit date
losses/              Tweak CrossEntropyLoss to take process_group in init             2022-12-27 10:47:43 -08:00
models/              Implement generation for GPT                                      2022-12-27 21:01:50 -08:00
modules/             Implement Tensor Parallel for GPT model                           2022-12-26 16:22:43 -08:00
ops/                 Implement TensorParallel for FusedDense and FusedDenseGeluDense   2022-12-24 11:48:56 -08:00
test_flash_attn.py   Skip flash_attn_split test                                        2022-11-13 12:27:48 -08:00
test_rotary.py       Add MLP, MHA, Block, Embedding modules                            2022-11-13 22:06:44 -08:00
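The two top-level files are pytest suites; the subdirectories hold further suites for the losses, models, modules, and ops packages. A minimal sketch of running the two listed files from the repository root, assuming pytest is installed and the flash-attn CUDA extensions are built (the flags are illustrative, not taken from the listing):

```python
# Minimal sketch: run the test suites listed above via pytest's Python API.
# Assumes pytest is installed and flash-attn's CUDA extensions are built;
# file paths match the directory listing, the flags are illustrative.
import sys
import pytest

if __name__ == "__main__":
    exit_code = pytest.main([
        "-q",                        # terse output
        "tests/test_flash_attn.py",  # attention kernel tests
        "tests/test_rotary.py",      # rotary embedding tests
    ])
    sys.exit(exit_code)
```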