vllm/vllm
Latest commit: [Bugfix] Fix LoRA weight sharding (#10450) — Jee Jee Li, 1700c543a5, 2024-11-23 17:23:17 -08:00
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
| Name | Last commit | Date |
|------|-------------|------|
| adapter_commons | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| assets | [CI/Build] Update CPU tests to include all "standard" tests (#5481) | 2024-11-08 23:30:04 +08:00 |
| attention | [Bugfix] Avoid import AttentionMetadata explicitly in Mllama (#10593) | 2024-11-23 18:12:20 +00:00 |
| compilation | [torch.compile] Inductor code caching fix (#10273) | 2024-11-20 21:44:57 -08:00 |
| core | Prefix Cache Aware Scheduling [1/n] (#10128) | 2024-11-22 21:15:55 -08:00 |
| distributed | [Misc] Add pynccl wrappers for all_gather and reduce_scatter (#9432) | 2024-11-22 22:14:03 -05:00 |
| engine | [Core] remove temporary local variables in LLMEngine.__init__ (#10577) | 2024-11-22 16:22:53 -08:00 |
| entrypoints | [Bugfix] Internal Server Error when tool_choice is incorrect. (#10567) | 2024-11-22 21:13:29 -08:00 |
| executor | [Platforms] Refactor openvino code (#10573) | 2024-11-22 22:23:12 -08:00 |
| inputs | [Misc] Suppress duplicated logging regarding multimodal input pipeline (#10530) | 2024-11-21 09:21:31 -08:00 |
| logging_utils | Rename vllm.logging to vllm.logging_utils (#10134) | 2024-11-08 20:53:24 +00:00 |
| lora | [Bugfix] Fix LoRA weight sharding (#10450) | 2024-11-23 17:23:17 -08:00 |
| model_executor | [Bugfix] Fix LoRA weight sharding (#10450) | 2024-11-23 17:23:17 -08:00 |
| multimodal | [2/N] handling placeholders in merged multi-modal processor (#10485) | 2024-11-22 21:25:09 -08:00 |
| platforms | [Bugfix] Avoid import AttentionMetadata explicitly in Mllama (#10593) | 2024-11-23 18:12:20 +00:00 |
| plugins | [torch.compile] limit inductor threads and lazy import quant (#10482) | 2024-11-20 18:36:33 -08:00 |
| profiler | [misc] CUDA Time Layerwise Profiler (#8337) | 2024-10-17 10:36:09 -04:00 |
| prompt_adapter | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| spec_decode | [torch.compile] support all attention backends (#10558) | 2024-11-22 14:04:42 -08:00 |
| transformers_utils | [Bugfix] Handle conflicts between modern and legacy fields (#10471) | 2024-11-20 14:45:08 +08:00 |
| triton_utils | [LoRA][Kernel] Remove the unused libentry module (#10214) | 2024-11-11 09:43:23 +00:00 |
| usage | mypy: check additional directories (#9162) | 2024-10-08 22:08:22 +00:00 |
| v1 | [Bugfix] Avoid import AttentionMetadata explicitly in Mllama (#10593) | 2024-11-23 18:12:20 +00:00 |
| vllm_flash_attn | [ci][build] fix vllm-flash-attn (#8699) | 2024-09-21 23:24:58 -07:00 |
| worker | [Bugfix] multi_modal_kwargs broadcast for CPU tensor parallel (#10541) | 2024-11-22 21:25:46 -08:00 |
| __init__.py | [Core] rename PromptInputs and inputs (#8876) | 2024-09-26 20:35:15 -07:00 |
| _custom_ops.py | [AMD] Add support for GGUF quantization on ROCm (#10254) | 2024-11-22 21:14:49 -08:00 |
| _ipex_ops.py | [Misc][XPU] Upgrade to Pytorch 2.5 for xpu backend (#9823) | 2024-11-06 17:29:03 -08:00 |
| beam_search.py | [Frontend] re-enable multi-modality input in the new beam search implementation (#9427) | 2024-10-29 11:49:47 +00:00 |
| block.py | [mypy] Enable mypy type checking for vllm/core (#7229) | 2024-08-28 07:11:14 +08:00 |
| config.py | [AMD] Add support for GGUF quantization on ROCm (#10254) | 2024-11-22 21:14:49 -08:00 |
| connections.py | [core][distributed] fix zmq hang (#6759) | 2024-07-24 17:37:12 -07:00 |
| envs.py | [Misc] Increase default video fetch timeout (#10495) | 2024-11-20 23:06:42 -08:00 |
| forward_context.py | [torch.compile] support all attention backends (#10558) | 2024-11-22 14:04:42 -08:00 |
| logger.py | [Core] Fix broken log configuration (#10458) | 2024-11-23 10:23:51 +08:00 |
| logits_process.py | [Frontend] Bad words sampling parameter (#9717) | 2024-10-26 16:29:38 +00:00 |
| outputs.py | [VLM] Report multi_modal_placeholders in output (#10407) | 2024-11-18 16:06:16 +08:00 |
| pooling_params.py | [Frontend] Chat-based Embeddings API (#9759) | 2024-11-01 08:13:35 +00:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [Bugfix][Frontend] Reject guided decoding in multistep mode (#9892) | 2024-11-01 01:09:46 +00:00 |
| scalar_type.py | [Bugfix] Fix support for dimension like integers and ScalarType (#9299) | 2024-10-17 19:08:34 +00:00 |
| scripts.py | [Frontend] Add --version flag to CLI (#10369) | 2024-11-15 13:13:53 -08:00 |
| sequence.py | Prefix Cache Aware Scheduling [1/n] (#10128) | 2024-11-22 21:15:55 -08:00 |
| tracing.py | [misc] hide best_of from engine (#9261) | 2024-10-10 21:30:44 -07:00 |
| utils.py | [2/N] handling placeholders in merged multi-modal processor (#10485) | 2024-11-22 21:25:09 -08:00 |
| version.py | [CI/Build] use setuptools-scm to set __version__ (#4738) | 2024-09-23 09:44:26 -07:00 |