vllm/vllm
Latest commit 593e79e733 by Travis Johnson:
[Bugfix] Use torch.set_num_threads() to configure parallelism in multiproc_gpu_executor (#6802)
Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
2024-07-26 22:15:20 -07:00
Name | Last commit message | Date
adapter_commons [CORE] Adding support for insertion of soft-tuned prompts (#4645) 2024-07-09 13:26:36 -07:00
assets [Misc] Manage HTTP connections in one place (#6600) 2024-07-22 21:32:02 -07:00
attention [Hardware][TPU] Implement tensor parallelism with Ray (#5871) 2024-07-26 20:54:27 -07:00
core [Misc] Small perf improvements (#6520) 2024-07-19 12:10:56 -07:00
distributed [TPU] Support collective communications in XLA devices (#6813) 2024-07-27 01:45:57 +00:00
engine [Hardware][TPU] Implement tensor parallelism with Ray (#5871) 2024-07-26 20:54:27 -07:00
entrypoints [Frontend] Factor out code for running uvicorn (#6828) 2024-07-27 09:58:25 +08:00
executor [Bugfix] torch.set_num_threads() in multiproc_gpu_executor (#6802) 2024-07-26 22:15:20 -07:00
inputs [Frontend] Refactor prompt processing (#4028) 2024-07-22 10:13:53 -07:00
logging [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) 2024-05-01 17:34:40 -07:00
lora [TPU] Support collective communications in XLA devices (#6813) 2024-07-27 01:45:57 +00:00
model_executor [Bugfix][Model] Jamba assertions and no chunked prefill by default for Jamba (#6784) 2024-07-26 20:45:31 -07:00
multimodal [Model] Adding support for MiniCPM-V (#4087) 2024-07-24 20:59:30 -07:00
platforms [Misc] Add a wrapper for torch.inference_mode (#6618) 2024-07-21 18:43:11 -07:00
prompt_adapter [CORE] Adding support for insertion of soft-tuned prompts (#4645) 2024-07-09 13:26:36 -07:00
server [Frontend] Factor out code for running uvicorn (#6828) 2024-07-27 09:58:25 +08:00
spec_decode [Bugfix] Miscalculated latency lead to time_to_first_token_seconds inaccurate. (#6686) 2024-07-24 08:58:42 -07:00
transformers_utils [Model] Support Nemotron models (Nemotron-3, Nemotron-4, Minitron) (#6611) 2024-07-26 14:33:42 -04:00
triton_utils [Bugfix] Add custom Triton cache manager to resolve MoE MP issue (#6140) 2024-07-15 10:12:47 -07:00
usage [Misc] Manage HTTP connections in one place (#6600) 2024-07-22 21:32:02 -07:00
worker [Hardware][TPU] Implement tensor parallelism with Ray (#5871) 2024-07-26 20:54:27 -07:00
__init__.py [Frontend] Refactor prompt processing (#4028) 2024-07-22 10:13:53 -07:00
_custom_ops.py Add fp8 support to reshape_and_cache_flash (#6667) 2024-07-24 18:36:52 +00:00
_ipex_ops.py [Kernel][Attention] Separate Attention.kv_scale into k_scale and v_scale (#6081) 2024-07-16 15:31:32 -07:00
block.py [core][misc] remove logical block (#5882) 2024-06-27 13:34:55 -07:00
config.py enforce eager mode with bnb quantization temporarily (#6846) 2024-07-27 01:32:20 +00:00
connections.py [core][distributed] fix zmq hang (#6759) 2024-07-24 17:37:12 -07:00
envs.py [Hardware] [Intel] Enable Multiprocessing and tensor parallel in CPU backend and update documentation (#6125) 2024-07-26 13:50:10 -07:00
logger.py [Misc] add logging level env var (#5045) 2024-05-24 23:49:49 -07:00
outputs.py [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 default) (#5602) 2024-07-01 20:10:37 -07:00
pooling_params.py [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) 2024-05-11 11:30:37 -07:00
py.typed Add py.typed so consumers of vLLM can get type checking (#1509) 2023-10-30 14:50:47 -07:00
sampling_params.py [Misc] Remove deprecation warning for beam search (#6659) 2024-07-23 00:21:58 +00:00
scripts.py [Frontend] split run_server into build_server and run_server (#6740) 2024-07-24 10:36:04 -07:00
sequence.py [Core] Use array to speedup padding (#6779) 2024-07-25 21:31:31 -07:00
tracing.py [Misc] Add OpenTelemetry support (#4687) 2024-06-19 01:17:03 +09:00
utils.py [Model] H2O Danube3-4b (#6451) 2024-07-26 20:47:50 -07:00
version.py Bump version to 0.5.3.post1 (#6696) 2024-07-23 10:08:59 -07:00