# vllm/vllm (last updated 2024-10-01 09:58:06 +00:00)
| Name | Last commit | Date |
|---|---|---|
| adapter_commons/ | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| assets/ | [Model][VLM] Add LLaVA-Onevision model support (#8486) | 2024-09-22 10:51:44 -07:00 |
| attention/ | [Core] Multi-Step + Single Step Prefills via Chunked Prefill code path (#8378) | 2024-09-27 13:32:07 -07:00 |
| compilation/ | [torch.compile] fix functionalization (#8480) | 2024-09-14 09:46:04 -07:00 |
| core/ | [Misc] Fix typo in BlockSpaceManagerV1 (#8944) | 2024-09-29 15:05:54 +00:00 |
| distributed/ | [misc][distributed] add VLLM_SKIP_P2P_CHECK flag (#8911) | 2024-09-27 14:27:56 -07:00 |
| engine/ | [Core] [Frontend] Priority scheduling for embeddings and in the OpenAI-API (#8965) | 2024-10-01 09:58:06 +00:00 |
| entrypoints/ | [Core] [Frontend] Priority scheduling for embeddings and in the OpenAI-API (#8965) | 2024-10-01 09:58:06 +00:00 |
| executor/ | [Core] Improve choice of Python multiprocessing method (#8823) | 2024-09-29 09:17:07 +08:00 |
| inputs/ | [CI/Build] Update models tests & examples (#8874) | 2024-09-28 09:54:35 -07:00 |
| logging/ | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| lora/ | [Model][LoRA] LoRA support added for MiniCPMV2.5 (#7199) | 2024-09-29 06:59:45 +00:00 |
| model_executor/ | [Bugfix] Fix Token IDs Reference for MiniCPM-V When Images are Provided With No Placeholders (#8991) | 2024-10-01 09:52:44 +00:00 |
| multimodal/ | [Model] Add support for the multi-modal Llama 3.2 model (#8811) | 2024-09-25 13:29:32 -07:00 |
| platforms/ | [CI/Build] Add test decorator for minimum GPU memory (#8925) | 2024-09-29 02:50:51 +00:00 |
| plugins/ | [plugin][torch.compile] allow to add custom compile backend (#8445) | 2024-09-13 09:32:42 -07:00 |
| prompt_adapter/ | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| spec_decode/ | [Core][Bugfix] Support prompt_logprobs returned with speculative decoding (#8047) | 2024-09-24 17:29:56 -07:00 |
| transformers_utils/ | [Bugfix] Fix code for downloading models from modelscope (#8443) | 2024-09-28 08:24:12 -07:00 |
| triton_utils/ | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| usage/ | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| vllm_flash_attn/ | [ci][build] fix vllm-flash-attn (#8699) | 2024-09-21 23:24:58 -07:00 |
| worker/ | [torch.compile] fix tensor alias (#8982) | 2024-10-01 03:40:48 +00:00 |
| __init__.py | [Core] rename PromptInputs and inputs (#8876) | 2024-09-26 20:35:15 -07:00 |
| _core_ext.py | [Bugfix] Allow ScalarType to be compiled with pytorch 2.3 and add checks for registering FakeScalarType and dynamo support. (#7886) | 2024-08-27 23:13:45 -04:00 |
| _custom_ops.py | [Kernel][Model] Varlen prefill + Prefill chunking support for mamba kernels and Jamba model (#8533) | 2024-09-29 17:35:58 -04:00 |
| _ipex_ops.py | [Hardware][intel GPU] bump up ipex version to 2.3 (#8365) | 2024-09-13 16:54:34 -07:00 |
| block.py | [mypy] Enable mypy type checking for vllm/core (#7229) | 2024-08-28 07:11:14 +08:00 |
| config.py | [Core] Multi-Step + Single Step Prefills via Chunked Prefill code path (#8378) | 2024-09-27 13:32:07 -07:00 |
| connections.py | [core][distributed] fix zmq hang (#6759) | 2024-07-24 17:37:12 -07:00 |
| envs.py | [misc][distributed] add VLLM_SKIP_P2P_CHECK flag (#8911) | 2024-09-27 14:27:56 -07:00 |
| logger.py | [Bugfix] Don't disable existing loggers (#7664) | 2024-08-19 15:11:58 -07:00 |
| outputs.py | Add output streaming support to multi-step + async while ensuring RequestOutput obj reuse (#8335) | 2024-09-23 15:38:04 -07:00 |
| pooling_params.py | [Core] Optimize SPMD architecture with delta + serialization optimization (#7109) | 2024-08-18 17:57:20 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [Frontend][Core] Move guided decoding params into sampling params (#8252) | 2024-10-01 09:34:25 +08:00 |
| scalar_type.py | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| scripts.py | [Core] Improve choice of Python multiprocessing method (#8823) | 2024-09-29 09:17:07 +08:00 |
| sequence.py | [Core] Multi-Step + Single Step Prefills via Chunked Prefill code path (#8378) | 2024-09-27 13:32:07 -07:00 |
| tracing.py | [CI/Build] Pin OpenTelemetry versions and make errors clearer (#7266) | 2024-08-20 10:02:21 -07:00 |
| utils.py | [CI/Build] Add test decorator for minimum GPU memory (#8925) | 2024-09-29 02:50:51 +00:00 |
| version.py | [CI/Build] use setuptools-scm to set __version__ (#4738) | 2024-09-23 09:44:26 -07:00 |