flash-attention/tests
Xuechen Li bb4cded17b
support when num_heads is not divisible by world_size; resolves #459 (#461)
* unequal rank.

* trim.

* enable passing in number of heads for each rank.

* simplify.

* simplify.

* cleanup.

* fix col parallel.

* fix bug with row parallel.

* fix out proj.

* refactor.

* fix sharding logic.

* refactor sharding.

* refactor.

* support multiple of.

* make fn reusable.

* fix bug in dimensions.

* scaffold.

* test uneven heads.

* fix test by adding barrier.

* refactor.

* reuse code.

* clean up.
2023-08-18 14:10:35 -07:00
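The commit above adds support for sharding attention heads across tensor-parallel ranks when num_heads is not divisible by world_size. As a rough, illustrative sketch of that idea only (the helper name heads_per_rank and the "first ranks take the extra heads" policy are assumptions, not the repository's actual implementation):

# Hypothetical sketch: one way to assign heads per tensor-parallel rank
# when num_heads is not divisible by world_size. The first
# (num_heads % world_size) ranks each take one extra head, so the
# per-rank counts still sum to num_heads.
def heads_per_rank(num_heads: int, world_size: int) -> list[int]:
    base, remainder = divmod(num_heads, world_size)
    return [base + 1 if rank < remainder else base for rank in range(world_size)]


if __name__ == "__main__":
    # e.g. 10 heads over 4 ranks -> [3, 3, 2, 2]
    assert heads_per_rank(10, 4) == [3, 3, 2, 2]
    assert sum(heads_per_rank(10, 4)) == 10

A scheme along these lines keeps every rank's slice of the QKV and output projections consistent with its local head count, which is what the column-parallel and row-parallel fixes listed in the commit body concern.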
layers [Rotary] Implement GPT-J style (interleaved) rotary 2023-03-14 14:35:53 -07:00
losses Tweak CrossEntropyLoss to take process_group in init 2022-12-27 10:47:43 -08:00
models support when num_heads is not divisible by world_size; resolves #459 (#461) 2023-08-18 14:10:35 -07:00
modules Implement ParallelGatedMlp (#251) 2023-07-26 12:14:15 -07:00
ops [LayerNorm] Add test for randomness 2023-07-23 12:31:55 -10:00
test_flash_attn.py Fix Bwd NaN for varlen when seqlen_q >> seqlen_k and causal 2023-08-16 15:12:36 -07:00
test_rotary.py Add MLP, MHA, Block, Embedding modules 2022-11-13 22:06:44 -08:00