flash-attention/flash_attn/utils
Xuechen Li bb4cded17b
support when num_heads is not divisible by world_size; resolves #459 (#461)
* unequal rank.
* trim.
* enable passing in number of heads for each rank.
* simplify.
* simplify.
* cleanup.
* fix col parallel.
* fix bug with row parallel.
* fix out proj.
* refactor.
* fix sharding logic (see the sketch below).
* refactor sharding.
* refactor.
* support multiple of.
* make fn reusable.
* fix bug in dimensions.
* scaffold.
* test uneven heads.
* fix test by adding barrier.
* refactor.
* reuse code.
* clean up.
2023-08-18 14:10:35 -07:00
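The headline change makes tensor parallelism work when num_heads is not evenly divisible by world_size, which requires each rank to know how many heads it owns. A minimal sketch of one such split, assuming the remainder heads go to the lowest-numbered ranks; the helper name is illustrative, not flash-attn's actual API:

```python
# Hypothetical helper: split num_heads across world_size ranks when the
# division is uneven; the first (num_heads % world_size) ranks get one extra.
def heads_for_rank(num_heads: int, world_size: int, rank: int) -> int:
    base, remainder = divmod(num_heads, world_size)
    return base + (1 if rank < remainder else 0)

# num_heads=14, world_size=4 -> per-rank head counts [4, 4, 3, 3]
assert [heads_for_rank(14, 4, r) for r in range(4)] == [4, 4, 3, 3]
assert sum(heads_for_rank(14, 4, r) for r in range(4)) == 14
```

With an uneven split like this, the column- and row-parallel projections must slice their weights by each rank's cumulative head offset rather than by a uniform chunk size, which is what the "fix col parallel" and "fix bug with row parallel" commits above address.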
__init__.py Add __init__.py files to subdirectories for installation 2022-11-17 16:55:44 -08:00
benchmark.py [Benchmark] Add script to benchmark FlashAttention 2023-07-28 00:26:52 -10:00
distributed.py support when num_heads is not divisible by world_size; resolves #459 (#461) 2023-08-18 14:10:35 -07:00
generation.py [Gen] Minor tweak to allocate_inference_cache 2023-04-21 11:56:47 -07:00
pretrained.py enable loading hf llama checkpoints for training (#446) 2023-08-15 08:33:15 -07:00
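Most of the uneven-heads change lands in distributed.py. A hedged sketch of how a column-parallel projection weight could be sliced under unequal per-rank head counts; the function name, shapes, and layout are assumptions for illustration, not the library's API:

```python
# Hypothetical sketch: slice a column-parallel weight (out_features, in_features)
# along the output dimension using per-rank head counts rather than equal chunks.
import torch

def shard_out_features(weight: torch.Tensor, num_heads: int, head_dim: int,
                       world_size: int, rank: int) -> torch.Tensor:
    # Per-rank head counts; earlier ranks absorb the remainder heads.
    counts = [num_heads // world_size + (1 if r < num_heads % world_size else 0)
              for r in range(world_size)]
    # Cumulative offsets give this rank's slice boundaries in feature units.
    start = sum(counts[:rank]) * head_dim
    end = start + counts[rank] * head_dim
    return weight[start:end]

w = torch.randn(14 * 64, 512)  # full projection weight: 14 heads of dim 64
local_w = shard_out_features(w, num_heads=14, head_dim=64, world_size=4, rank=2)
print(local_w.shape)  # torch.Size([192, 512]) -> rank 2 holds 3 heads
```

A row-parallel layer would slice along the input-feature dimension instead, using the same cumulative per-rank offsets.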