vllm/vllm (latest commit 2024-09-18 10:38:11 +00:00)
adapter_commons [Core] Optimize SPMD architecture with delta + serialization optimization (#7109) 2024-08-18 17:57:20 -07:00
assets [model] Support for Llava-Next-Video model (#7559) 2024-09-10 22:21:36 -07:00
attention [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
compilation [torch.compile] fix functionalization (#8480) 2024-09-14 09:46:04 -07:00
core [Bugfix] Fix async postprocessor in case of preemption (#8267) 2024-09-07 21:01:51 -07:00
distributed [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
engine [Encoder decoder] Add cuda graph support during decoding for encoder-decoder models (#7631) 2024-09-17 07:35:01 -07:00
entrypoints [Misc] Add argument to disable FastAPI docs (#8554) 2024-09-18 09:51:59 +00:00
executor [Misc] Limit to ray[adag] 2.35 to avoid backward incompatible change (#8509) 2024-09-17 00:06:26 -07:00
inputs [Core] Factor out input preprocessing to a separate class (#7329) 2024-09-13 02:56:13 +00:00
logging [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) 2024-05-01 17:34:40 -07:00
lora [Misc] Remove SqueezeLLM (#8220) 2024-09-06 16:29:03 -06:00
model_executor [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
multimodal [Model][VLM] Add Qwen2-VL model support (#7905) 2024-09-11 09:31:19 -07:00
platforms [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
plugins [plugin][torch.compile] allow to add custom compile backend (#8445) 2024-09-13 09:32:42 -07:00
prompt_adapter [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
spec_decode [Spec Decode] Move ops.advance_step to flash attn advance_step (#8224) 2024-09-10 13:18:14 -07:00
transformers_utils [Model] Add mistral function calling format to all models loaded with "mistral" format (#8515) 2024-09-17 17:50:37 +00:00
triton_utils [refactor] remove triton based sampler (#8524) 2024-09-16 20:04:48 -07:00
usage [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
worker [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
__init__.py [Frontend] Refactor prompt processing (#4028) 2024-07-22 10:13:53 -07:00
_core_ext.py [Bugfix] Allow ScalarType to be compiled with pytorch 2.3 and add checks for registering FakeScalarType and dynamo support. (#7886) 2024-08-27 23:13:45 -04:00
_custom_ops.py [Kernel] Change interface to Mamba causal_conv1d_update for continuous batching (#8012) 2024-09-17 23:44:27 +00:00
_ipex_ops.py [Hardware][intel GPU] bump up ipex version to 2.3 (#8365) 2024-09-13 16:54:34 -07:00
block.py [mypy] Enable mypy type checking for vllm/core (#7229) 2024-08-28 07:11:14 +08:00
config.py [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
connections.py [core][distributed] fix zmq hang (#6759) 2024-07-24 17:37:12 -07:00
envs.py [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
logger.py [Bugfix] Don't disable existing loggers (#7664) 2024-08-19 15:11:58 -07:00
outputs.py [Core] Add engine option to return only deltas or final output (#7381) 2024-09-12 12:02:00 -07:00
pooling_params.py [Core] Optimize SPMD architecture with delta + serialization optimization (#7109) 2024-08-18 17:57:20 -07:00
py.typed Add py.typed so consumers of vLLM can get type checking (#1509) 2023-10-30 14:50:47 -07:00
sampling_params.py [Core] Add engine option to return only deltas or final output (#7381) 2024-09-12 12:02:00 -07:00
scalar_type.py [Misc] Disambiguate quantized types via a new ScalarType (#6396) 2024-08-02 13:51:58 -07:00
scripts.py [BugFix] Fix clean shutdown issues (#8492) 2024-09-16 09:33:46 -07:00
sequence.py [HotFix] Fix final output truncation with stop string + streaming (#8468) 2024-09-13 11:31:12 -07:00
tracing.py [CI/Build] Pin OpenTelemetry versions and make errors clearer (#7266) 2024-08-20 10:02:21 -07:00
utils.py [CI/Build] Avoid CUDA initialization (#8534) 2024-09-18 10:38:11 +00:00
version.py bump version to v0.6.1.post2 (#8473) 2024-09-13 11:35:00 -07:00