vllm/vllm (latest commit: 2024-09-19 18:28:25 +00:00)

Directories:

| Name | Last commit | Date |
|------|-------------|------|
| `adapter_commons` | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| `assets` | [model] Support for Llava-Next-Video model (#7559) | 2024-09-10 22:21:36 -07:00 |
| `attention` | [Kernel][Amd] Add fp8 kv cache support for rocm custom paged attention (#8577) | 2024-09-19 17:37:57 +00:00 |
| `compilation` | [torch.compile] fix functionalization (#8480) | 2024-09-14 09:46:04 -07:00 |
| `core` | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| `distributed` | [Core] zmq: bind only to 127.0.0.1 for local-only usage (#8543) | 2024-09-18 16:10:27 +00:00 |
| `engine` | [Frontend] Use MQLLMEngine for embeddings models too (#8584) | 2024-09-19 12:51:06 -04:00 |
| `entrypoints` | [BugFix] Nonzero exit code if MQLLMEngine startup fails (#8572) | 2024-09-18 20:17:55 +00:00 |
| `executor` | [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157) | 2024-09-18 13:56:58 +00:00 |
| `inputs` | [Core] Factor out input preprocessing to a separate class (#7329) | 2024-09-13 02:56:13 +00:00 |
| `logging` | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| `lora` | [Misc] Remove SqueezeLLM (#8220) | 2024-09-06 16:29:03 -06:00 |
| `model_executor` | [Core] simplify logits resort in _apply_top_k_top_p (#8619) | 2024-09-19 18:28:25 +00:00 |
| `multimodal` | [Model][VLM] Add Qwen2-VL model support (#7905) | 2024-09-11 09:31:19 -07:00 |
| `platforms` | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| `plugins` | [plugin][torch.compile] allow to add custom compile backend (#8445) | 2024-09-13 09:32:42 -07:00 |
| `prompt_adapter` | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| `spec_decode` | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| `transformers_utils` | [Model] Support Solar Model (#8386) | 2024-09-18 11:04:00 -06:00 |
| `triton_utils` | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| `usage` | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| `worker` | [Bugfix] [Encoder-Decoder] Bugfix for encoder specific metadata construction during decode of encoder-decoder models. (#8545) | 2024-09-19 02:24:15 +00:00 |
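
The latest commit to `model_executor` above touches `_apply_top_k_top_p` (#8619). For orientation only, below is a minimal, generic PyTorch sketch of top-k/top-p logit filtering; the function name, tensor shapes, and masking details are illustrative assumptions, not vLLM's actual `_apply_top_k_top_p`.

```python
import torch


def apply_top_k_top_p(logits: torch.Tensor, top_k: int, top_p: float) -> torch.Tensor:
    """Mask logits outside the top-k / top-p (nucleus) set with -inf.

    Generic illustration only, not vLLM's implementation.
    Expects logits of shape [batch, vocab_size].
    """
    sorted_logits, sorted_idx = torch.sort(logits, dim=-1, descending=True)

    # Top-k: everything past the k-th largest logit is masked out.
    if top_k > 0:
        sorted_logits[:, top_k:] = float("-inf")

    # Top-p: once cumulative probability exceeds top_p, mask the remainder.
    probs = torch.softmax(sorted_logits, dim=-1)
    cumulative = torch.cumsum(probs, dim=-1)
    # Subtract the current token's probability so the token that first
    # crosses the top_p threshold is still kept.
    mask = (cumulative - probs) > top_p
    sorted_logits[mask] = float("-inf")

    # Scatter the filtered logits back to their original vocabulary order.
    return torch.full_like(logits, float("-inf")).scatter(-1, sorted_idx, sorted_logits)
```

Both filters share a single descending sort, and the final scatter restores the original vocabulary order so downstream softmax and sampling can run unchanged.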

Files:

| Name | Last commit | Date |
|------|-------------|------|
| `__init__.py` | [Frontend] Refactor prompt processing (#4028) | 2024-07-22 10:13:53 -07:00 |
| `_core_ext.py` | [Bugfix] Allow ScalarType to be compiled with pytorch 2.3 and add checks for registering FakeScalarType and dynamo support. (#7886) | 2024-08-27 23:13:45 -04:00 |
| `_custom_ops.py` | [Kernel][Amd] Add fp8 kv cache support for rocm custom paged attention (#8577) | 2024-09-19 17:37:57 +00:00 |
| `_ipex_ops.py` | [Hardware][intel GPU] bump up ipex version to 2.3 (#8365) | 2024-09-13 16:54:34 -07:00 |
| `block.py` | [mypy] Enable mypy type checking for vllm/core (#7229) | 2024-08-28 07:11:14 +08:00 |
| `config.py` | [AMD][ROCm]Quantization methods on ROCm; Fix _scaled_mm call (#8380) | 2024-09-18 10:41:08 -07:00 |
| `connections.py` | [core][distributed] fix zmq hang (#6759) | 2024-07-24 17:37:12 -07:00 |
| `envs.py` | [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157) | 2024-09-18 13:56:58 +00:00 |
| `logger.py` | [Bugfix] Don't disable existing loggers (#7664) | 2024-08-19 15:11:58 -07:00 |
| `outputs.py` | [Core] Add engine option to return only deltas or final output (#7381) | 2024-09-12 12:02:00 -07:00 |
| `pooling_params.py` | [Core] Optimize SPMD architecture with delta + serialization optimization (#7109) | 2024-08-18 17:57:20 -07:00 |
| `py.typed` | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| `sampling_params.py` | [Core] Add engine option to return only deltas or final output (#7381) | 2024-09-12 12:02:00 -07:00 |
| `scalar_type.py` | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| `scripts.py` | [BugFix] Fix clean shutdown issues (#8492) | 2024-09-16 09:33:46 -07:00 |
| `sequence.py` | [HotFix] Fix final output truncation with stop string + streaming (#8468) | 2024-09-13 11:31:12 -07:00 |
| `tracing.py` | [CI/Build] Pin OpenTelemetry versions and make errors clearer (#7266) | 2024-08-20 10:02:21 -07:00 |
| `utils.py` | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| `version.py` | bump version to v0.6.1.post2 (#8473) | 2024-09-13 11:35:00 -07:00 |
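
Several of the files listed above back vLLM's public offline-inference path: `sampling_params.py` defines `SamplingParams`, `outputs.py` defines the `RequestOutput` objects returned by the engine, and the `LLM` wrapper lives under `entrypoints`. A minimal usage sketch, assuming a small placeholder model name and default engine settings:

```python
from vllm import LLM, SamplingParams

# Per-request decoding settings (see vllm/sampling_params.py).
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Offline-inference wrapper around the engine (vllm/entrypoints/llm.py).
# "facebook/opt-125m" is only a small placeholder model for illustration.
llm = LLM(model="facebook/opt-125m")

# generate() returns RequestOutput objects (see vllm/outputs.py).
outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.prompt, output.outputs[0].text)
```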