vllm/vllm
Latest commit 3f8d42c81f by Travis Johnson, 2024-07-19 19:18:19 -07:00:
Pipeline Parallel: Guard for KeyErrors at request abort (#6587)
Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
| Name | Last commit | Date |
| --- | --- | --- |
| adapter_commons/ | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| assets/ | [CI/Build] Remove "boardwalk" image asset (#6460) | 2024-07-16 08:59:36 -07:00 |
| attention/ | [Bugfix] Update flashinfer.py with PagedAttention forwards - Fixes Gemma2 OpenAI Server Crash (#6501) | 2024-07-18 07:47:13 +00:00 |
| core/ | [Misc] Small perf improvements (#6520) | 2024-07-19 12:10:56 -07:00 |
| distributed/ | [bugfix][distributed] fix multi-node bug for shared memory (#6597) | 2024-07-19 15:34:34 -07:00 |
| engine/ | Pipeline Parallel: Guard for KeyErrors at request abort (#6587) | 2024-07-19 19:18:19 -07:00 |
| entrypoints/ | [Bugfix][Frontend] remove duplicate init logger (#6581) | 2024-07-19 10:16:27 -07:00 |
| executor/ | [Core] Allow specifying custom Executor (#6557) | 2024-07-20 01:25:06 +00:00 |
| inputs/ | [Doc] Move guide for multimodal model and other improvements (#6168) | 2024-07-06 17:18:59 +08:00 |
| logging/ | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| lora/ | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| model_executor/ | [ Kernel ] Enable Dynamic Per Token fp8 (#6547) | 2024-07-19 23:08:15 +00:00 |
| multimodal/ | [Bugfix] Convert image to RGB by default (#6430) | 2024-07-15 05:39:15 +00:00 |
| platforms/ | [CI/Build] Enable mypy typing for remaining folders (#6268) | 2024-07-10 22:15:55 +08:00 |
| prompt_adapter/ | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| spec_decode/ | [Bugfix] [SpecDecode] AsyncMetricsCollector: update time since last collection (#6578) | 2024-07-19 14:01:03 -07:00 |
| transformers_utils/ | [Core] Allow specifying custom Executor (#6557) | 2024-07-20 01:25:06 +00:00 |
| triton_utils/ | [Bugfix] Add custom Triton cache manager to resolve MoE MP issue (#6140) | 2024-07-15 10:12:47 -07:00 |
| usage/ | [CI/Build] vLLM cache directory for images (#6444) | 2024-07-15 23:12:25 -07:00 |
| worker/ | [Core] Allow specifying custom Executor (#6557) | 2024-07-20 01:25:06 +00:00 |
| __init__.py | [Misc] Add generated git commit hash as vllm.__commit__ (#6386) | 2024-07-12 22:52:15 +00:00 |
| _custom_ops.py | [ Kernel ] FP8 Dynamic Per Token Quant - Add scale_ub (#6593) | 2024-07-19 18:15:26 -07:00 |
| _ipex_ops.py | [Kernel][Attention] Separate Attention.kv_scale into k_scale and v_scale (#6081) | 2024-07-16 15:31:32 -07:00 |
| block.py | [core][misc] remove logical block (#5882) | 2024-06-27 13:34:55 -07:00 |
| config.py | [Core] Allow specifying custom Executor (#6557) | 2024-07-20 01:25:06 +00:00 |
| envs.py | [Core] Introduce SPMD worker execution using Ray accelerated DAG (#6032) | 2024-07-17 22:27:09 -07:00 |
| logger.py | [Misc] add logging level env var (#5045) | 2024-05-24 23:49:49 -07:00 |
| outputs.py | [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 default) (#5602) | 2024-07-01 20:10:37 -07:00 |
| pooling_params.py | [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734) | 2024-05-11 11:30:37 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | Report usage for beam search (#6404) | 2024-07-14 19:37:35 -07:00 |
| scripts.py | [Feature] vLLM CLI (#5090) | 2024-07-14 15:36:43 -07:00 |
| sequence.py | [Misc] Small perf improvements (#6520) | 2024-07-19 12:10:56 -07:00 |
| tracing.py | [Misc] Add OpenTelemetry support (#4687) | 2024-06-19 01:17:03 +09:00 |
| utils.py | [Misc] Small perf improvements (#6520) | 2024-07-19 12:10:56 -07:00 |
| version.py | bump version to v0.5.2 (#6433) | 2024-07-15 17:27:40 +00:00 |