| Name | Last commit message | Last commit date |
| --- | --- | --- |
| adapter_commons | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| assets | [Model][VLM] Add LLaVA-Onevision model support (#8486) | 2024-09-22 10:51:44 -07:00 |
| attention | [Core] Multi-Step + Single Step Prefills via Chunked Prefill code path (#8378) | 2024-09-27 13:32:07 -07:00 |
| compilation | [torch.compile] fix functionalization (#8480) | 2024-09-14 09:46:04 -07:00 |
| core | [Bugfix] Block manager v2 with preemption and lookahead slots (#8824) | 2024-09-29 09:17:45 +08:00 |
| distributed | [misc][distributed] add VLLM_SKIP_P2P_CHECK flag (#8911) | 2024-09-27 14:27:56 -07:00 |
| engine | [Bugfix] Fix PP for Multi-Step (#8887) | 2024-09-28 08:52:46 -07:00 |
| entrypoints | [Frontend] Make beam search emulator temperature modifiable (#8928) | 2024-09-28 11:30:21 -07:00 |
| executor | [Core] Improve choice of Python multiprocessing method (#8823) | 2024-09-29 09:17:07 +08:00 |
| inputs | [CI/Build] Update models tests & examples (#8874) | 2024-09-28 09:54:35 -07:00 |
| logging | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| lora | [Kernel][LoRA] Add assertion for punica sgmv kernels (#7585) | 2024-09-23 18:57:42 +00:00 |
| model_executor | [Model] Support Qwen2.5-Math-RM-72B (#8896) | 2024-09-28 21:19:39 -07:00 |
| multimodal | [Model] Add support for the multi-modal Llama 3.2 model (#8811) | 2024-09-25 13:29:32 -07:00 |
| platforms | [CI/Build] Add test decorator for minimum GPU memory (#8925) | 2024-09-29 02:50:51 +00:00 |
| plugins | [plugin][torch.compile] allow to add custom compile backend (#8445) | 2024-09-13 09:32:42 -07:00 |
| prompt_adapter | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| spec_decode | [Core][Bugfix] Support prompt_logprobs returned with speculative decoding (#8047) | 2024-09-24 17:29:56 -07:00 |
| transformers_utils | [Bugfix] Fix code for downloading models from modelscope (#8443) | 2024-09-28 08:24:12 -07:00 |
| triton_utils | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| usage | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| vllm_flash_attn | [ci][build] fix vllm-flash-attn (#8699) | 2024-09-21 23:24:58 -07:00 |
| worker | [Bugfix] Fix PP for Multi-Step (#8887) | 2024-09-28 08:52:46 -07:00 |
| __init__.py | [Core] rename PromptInputs and inputs (#8876) | 2024-09-26 20:35:15 -07:00 |
| _core_ext.py | [Bugfix] Allow ScalarType to be compiled with pytorch 2.3 and add checks for registering FakeScalarType and dynamo support. (#7886) | 2024-08-27 23:13:45 -04:00 |
| _custom_ops.py | [Kernel] Fullgraph and opcheck tests (#8479) | 2024-09-25 08:35:52 -06:00 |
| _ipex_ops.py | [Hardware][intel GPU] bump up ipex version to 2.3 (#8365) | 2024-09-13 16:54:34 -07:00 |
| block.py | [mypy] Enable mypy type checking for vllm/core (#7229) | 2024-08-28 07:11:14 +08:00 |
| config.py | [Core] Multi-Step + Single Step Prefills via Chunked Prefill code path (#8378) | 2024-09-27 13:32:07 -07:00 |
| connections.py | [core][distributed] fix zmq hang (#6759) | 2024-07-24 17:37:12 -07:00 |
| envs.py | [misc][distributed] add VLLM_SKIP_P2P_CHECK flag (#8911) | 2024-09-27 14:27:56 -07:00 |
| logger.py | [Bugfix] Don't disable existing loggers (#7664) | 2024-08-19 15:11:58 -07:00 |
| outputs.py | Add output streaming support to multi-step + async while ensuring RequestOutput obj reuse (#8335) | 2024-09-23 15:38:04 -07:00 |
| pooling_params.py | [Core] Optimize SPMD architecture with delta + serialization optimization (#7109) | 2024-08-18 17:57:20 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [misc] soft drop beam search (#8763) | 2024-09-24 15:48:39 -07:00 |
| scalar_type.py | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| scripts.py | [Core] Improve choice of Python multiprocessing method (#8823) | 2024-09-29 09:17:07 +08:00 |
| sequence.py | [Core] Multi-Step + Single Step Prefills via Chunked Prefill code path (#8378) | 2024-09-27 13:32:07 -07:00 |
| tracing.py | [CI/Build] Pin OpenTelemetry versions and make errors clearer (#7266) | 2024-08-20 10:02:21 -07:00 |
| utils.py | [CI/Build] Add test decorator for minimum GPU memory (#8925) | 2024-09-29 02:50:51 +00:00 |
| version.py | [CI/Build] use setuptools-scm to set __version__ (#4738) | 2024-09-23 09:44:26 -07:00 |