vllm/vllm/distributed (directory listing, last updated 2024-07-03 16:40:31 -07:00)

| Name | Last commit | Date |
| --- | --- | --- |
| `device_communicators/` | [misc][cuda] use NVML to avoid accidental CUDA initialization (#6007); see the NVML sketch below | 2024-06-30 20:07:34 -07:00 |
| `__init__.py` | [Core][Refactor] move parallel_utils into vllm/distributed (#3950) | 2024-04-10 15:33:30 -07:00 |
| `communication_op.py` | [Core][Distributed] code deduplication in TP & PP with coordinator (#5293) | 2024-06-12 17:27:08 -07:00 |
| `parallel_state.py` | [core][distributed] custom allreduce when pp size > 1 (#6117) | 2024-07-03 14:41:32 -07:00 |
| `utils.py` | [core][distributed] support n layers % pp size != 0 (#6115); see the partitioning sketch below | 2024-07-03 16:40:31 -07:00 |
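The `device_communicators` commit title refers to querying GPUs through NVML rather than the CUDA runtime, so the calling process can inspect devices without creating a CUDA context (a context created too early can break later fork-based worker initialization). Below is a minimal sketch using the `pynvml` bindings; the helper name `device_count_without_cuda` is hypothetical and this is not vLLM's actual code, just an illustration of the technique the commit names.

```python
# Sketch: count GPUs via NVML instead of torch.cuda, so no CUDA context
# is created in this process. Assumes the `pynvml` package is installed.
# `device_count_without_cuda` is a hypothetical helper, not vLLM's API.
import pynvml


def device_count_without_cuda() -> int:
    """Return the number of NVIDIA GPUs without initializing CUDA."""
    pynvml.nvmlInit()
    try:
        return pynvml.nvmlDeviceGetCount()
    finally:
        pynvml.nvmlShutdown()


if __name__ == "__main__":
    # Unlike torch.cuda.device_count(), this leaves the process free to
    # fork workers that each initialize CUDA themselves.
    print(f"visible GPUs: {device_count_without_cuda()}")
```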
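The `utils.py` commit concerns assigning transformer layers to pipeline-parallel stages when the layer count is not evenly divisible by the stage count. The sketch below shows one common partitioning scheme, in which the remainder layers are spread over the earliest stages; the function name and the exact distribution policy are assumptions for illustration, not necessarily what #6115 implements.

```python
# Sketch: partition `num_layers` transformer layers across `pp_size`
# pipeline stages when num_layers % pp_size != 0. The policy here
# (earlier stages take one extra layer) is an assumption, not
# necessarily the policy adopted in #6115.
def get_layer_range(num_layers: int, pp_size: int, pp_rank: int) -> tuple[int, int]:
    """Return the [start, end) layer indices owned by stage `pp_rank`."""
    base, remainder = divmod(num_layers, pp_size)
    # Stages with rank < remainder own base + 1 layers; the rest own base.
    start = pp_rank * base + min(pp_rank, remainder)
    end = start + base + (1 if pp_rank < remainder else 0)
    return start, end


# Example: 13 layers over 4 stages -> stage sizes 4, 3, 3, 3.
assert [get_layer_range(13, 4, r) for r in range(4)] == [
    (0, 4), (4, 7), (7, 10), (10, 13)
]
```

Spreading the remainder this way keeps stage sizes within one layer of each other, which bounds the pipeline-bubble imbalance regardless of how uneven the division is.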