flash-attention/flash_attn/ops
Xuechen Li bb4cded17b
support when num_heads is not divisible by world_size; resolves #459 (#461)
* unequal rank.

* trim.

* enable passing in number of heads for each rank.

* simplify.

* simplify.

* cleanup.

* fix col parallel.

* fix bug with row parallel.

* fix out proj.

* refactor.

* fix sharding logic.

* refactor sharding.

* refactor.

* support multiple of.

* make fn reusable.

* fix bug in dimensions.

* scaffold.

* test uneven heads.

* fix test by adding barrier.

* refactor.

* reuse code.

* clean up.
2023-08-18 14:10:35 -07:00
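The headline change above lets the tensor-parallel layers in fused_dense.py shard attention heads unevenly across ranks, so num_heads no longer has to be divisible by world_size. Below is a minimal sketch of one way such a split can be computed and used to size a rank's fused QKV projection; the helper names (`heads_per_rank`, `local_qkv_out_features`) are hypothetical and are not the repository's API.

```python
# Minimal sketch (not flash_attn's actual implementation) of uneven
# head sharding across tensor-parallel ranks.

def heads_per_rank(num_heads: int, world_size: int) -> list[int]:
    """Spread num_heads over world_size ranks as evenly as possible.

    The first num_heads % world_size ranks receive one extra head,
    e.g. 10 heads on 4 ranks -> [3, 3, 2, 2].
    """
    base, extra = divmod(num_heads, world_size)
    return [base + (1 if r < extra else 0) for r in range(world_size)]


def local_qkv_out_features(num_heads: int, head_dim: int,
                           world_size: int, rank: int) -> int:
    """Output width of this rank's column-parallel QKV projection.

    Each rank projects only its own heads, so the local weight shard
    is sized by the rank's local head count rather than by a uniform
    num_heads // world_size.
    """
    local_heads = heads_per_rank(num_heads, world_size)[rank]
    return 3 * local_heads * head_dim  # fused Q, K, V


# Example: 14 heads of dim 64 on 4 ranks.
print(heads_per_rank(14, 4))                 # [4, 4, 3, 3]
print(local_qkv_out_features(14, 64, 4, 0))  # 768 = 3 * 4 * 64
print(local_qkv_out_features(14, 64, 4, 3))  # 576 = 3 * 3 * 64
```

With a split like this, each rank's column-parallel QKV shard and row-parallel output-projection shard are sized by its local head count; the row-parallel all-reduce is unaffected, since every rank's partial output still covers the full hidden dimension.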
triton Implement LLaMa 2023-04-18 21:51:35 -07:00
__init__.py Add __init__.py files to subdirectories for installation 2022-11-17 16:55:44 -08:00
activations.py [FusedDense] Enable sqrelu activation in FusedMLP 2023-04-13 15:29:32 -07:00
fused_dense.py support when num_heads is not divisible by world_size; resolves #459 (#461) 2023-08-18 14:10:35 -07:00
layer_norm.py [LayerNorm] Make sure memory addresses are aligned to 16 bytes 2023-07-04 14:53:12 -07:00
rms_norm.py Implement LLaMa 2023-04-18 21:51:35 -07:00