vllm/vllm

Latest commit: 8f0a9ca890 by Michael Goin (mgoin <michael@neuralmagic.com>), 2024-11-04 16:57:44 -07:00
[Bugfix] Respect modules_to_not_convert within awq_marlin (#9895)
adapter_commons [CI/Build] Update Ruff version (#8469) 2024-09-18 11:00:56 +00:00
assets [Model][VLM] Add LLaVA-Onevision model support (#8486) 2024-09-22 10:51:44 -07:00
attention [Misc] Compute query_start_loc/seq_start_loc on CPU (#9447) 2024-11-04 08:54:37 +00:00
compilation [torch.compile] use interpreter with stable api from pytorch (#9889) 2024-11-01 11:50:37 -07:00
core [Core][VLM] Add precise multi-modal placeholder tracking (#8346) 2024-11-01 16:21:10 -07:00
distributed [torch.compile] directly register custom op (#9896) 2024-10-31 21:56:09 -07:00
engine [Frontend] Add max_tokens prometheus metric (#9881) 2024-11-04 22:53:24 +00:00
entrypoints [Bugfix] Fix MQLLMEngine hanging (#9973) 2024-11-04 16:01:43 -05:00
executor [2/N] executor pass the complete config to worker/modelrunner (#9938) 2024-11-02 07:35:05 -07:00
inputs [Core][VLM] Add precise multi-modal placeholder tracking (#8346) 2024-11-01 16:21:10 -07:00
logging [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) 2024-05-01 17:34:40 -07:00
lora [Model][LoRA] LoRA support added for Qwen (#9622) 2024-10-29 04:14:07 +00:00
model_executor [Bugfix] Respect modules_to_not_convert within awq_marlin (#9895) 2024-11-04 16:57:44 -07:00
multimodal [Frontend] Multi-Modality Support for Loading Local Image Files (#9915) 2024-11-04 15:34:57 +00:00
platforms [Bugfix][OpenVINO] Fix circular reference #9939 (#9974) 2024-11-04 18:14:13 +08:00
plugins [3/N] model runner pass the whole config to model (#9958) 2024-11-02 12:08:49 -07:00
profiler [misc] CUDA Time Layerwise Profiler (#8337) 2024-10-17 10:36:09 -04:00
prompt_adapter [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
spec_decode [4.5/N] bugfix for quant config in speculative decode (#10007) 2024-11-04 15:11:59 -08:00
transformers_utils [Model] Add support for H2OVL-Mississippi models (#9747) 2024-11-04 00:15:36 +00:00
triton_utils [XPU] avoid triton import for xpu (#9440) 2024-10-24 05:14:00 +00:00
usage mypy: check additional directories (#9162) 2024-10-08 22:08:22 +00:00
v1 [V1] Fix Configs (#9971) 2024-11-04 00:24:40 +00:00
vllm_flash_attn [ci][build] fix vllm-flash-attn (#8699) 2024-09-21 23:24:58 -07:00
worker [3/N] model runner pass the whole config to model (#9958) 2024-11-02 12:08:49 -07:00
__init__.py [Core] rename PromptInputs and inputs (#8876) 2024-09-26 20:35:15 -07:00
_custom_ops.py [Hardware][ROCM] using current_platform.is_rocm (#9642) 2024-10-28 04:07:00 +00:00
_ipex_ops.py [Hardware][intel GPU] bump up ipex version to 2.3 (#8365) 2024-09-13 16:54:34 -07:00
beam_search.py [Frontend] re-enable multi-modality input in the new beam search implementation (#9427) 2024-10-29 11:49:47 +00:00
block.py [mypy] Enable mypy type checking for vllm/core (#7229) 2024-08-28 07:11:14 +08:00
config.py [4/N] make quant config first-class citizen (#9978) 2024-11-04 08:51:31 -08:00
connections.py [core][distributed] fix zmq hang (#6759) 2024-07-24 17:37:12 -07:00
envs.py [torch.compile] rework compile control with piecewise cudagraph (#9715) 2024-10-29 23:03:49 -07:00
forward_context.py [misc] add forward context for attention (#9029) 2024-10-03 12:09:42 -07:00
logger.py [Misc] Add an env var VLLM_LOGGING_PREFIX; if set, it will be prepended to all logging messages (#9590) 2024-10-23 11:17:28 +08:00
logits_process.py [Frontend] Bad words sampling parameter (#9717) 2024-10-26 16:29:38 +00:00
outputs.py [core] move parallel sampling out from vllm core (#9302) 2024-10-22 00:31:44 +00:00
pooling_params.py [Frontend] Chat-based Embeddings API (#9759) 2024-11-01 08:13:35 +00:00
py.typed Add py.typed so consumers of vLLM can get type checking (#1509) 2023-10-30 14:50:47 -07:00
sampling_params.py [Bugfix][Frontend] Reject guided decoding in multistep mode (#9892) 2024-11-01 01:09:46 +00:00
scalar_type.py [Bugfix] Fix support for dimension like integers and ScalarType (#9299) 2024-10-17 19:08:34 +00:00
scripts.py [Frontend] Add Early Validation For Chat Template / Tool Call Parser (#9151) 2024-10-08 14:31:26 +00:00
sequence.py [Bugfix] Using the correct type hints (#9885) 2024-11-04 06:19:51 +00:00
tracing.py [misc] hide best_of from engine (#9261) 2024-10-10 21:30:44 -07:00
utils.py [torch.compile] fix cpu broken code (#9947) 2024-11-01 23:35:47 -07:00
version.py [CI/Build] use setuptools-scm to set __version__ (#4738) 2024-09-23 09:44:26 -07:00