| Name | Latest commit message | Commit date |
|---|---|---|
| `adapter_commons` | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| `assets` | [CI/Build] Update CPU tests to include all "standard" tests (#5481) | 2024-11-08 23:30:04 +08:00 |
| `attention` | [Platform][Refactor] Extract func get_default_attn_backend to Platform (#10358) | 2024-11-19 11:22:26 +08:00 |
| `compilation` | [4/N][torch.compile] clean up set_torch_compile_backend (#10401) | 2024-11-17 23:57:20 -08:00 |
| `core` | [Doc] fix doc string typo in block_manager swap_out function (#10212) | 2024-11-11 08:53:07 -08:00 |
| `distributed` | [core][distributed] use tcp store directly (#10275) | 2024-11-12 17:36:08 -08:00 |
| `engine` | [Bugfix] Enforce no chunked prefill for embedding models (#10470) | 2024-11-20 05:12:51 +00:00 |
| `entrypoints` | [BugFix] Fix hermes tool parser output error stream arguments in some cases (#10395) (#10398) | 2024-11-19 13:42:50 +00:00 |
| `executor` | [Bugfix] Ignore ray reinit error when current platform is ROCm or XPU (#10375) | 2024-11-18 11:29:26 +08:00 |
| `inputs` | [Misc] Fix import error in tensorizer tests and cleanup some code (#10349) | 2024-11-15 09:34:17 +00:00 |
| `logging_utils` | Rename vllm.logging to vllm.logging_utils (#10134) | 2024-11-08 20:53:24 +00:00 |
| `lora` | [Kernel] Explicitly specify other value in tl.load calls (#9014) | 2024-11-18 11:39:40 -08:00 |
| `model_executor` | [Bugfix] Fix Mamba model initialization and MLP Speculator weights loading (#10456) | 2024-11-20 05:04:05 +00:00 |
| `multimodal` | [1/N] Initial prototype for multi-modal processor (#10044) | 2024-11-13 12:39:03 +00:00 |
| `platforms` | [6/N] torch.compile rollout to users (#10437) | 2024-11-19 10:09:03 -08:00 |
| `plugins` | [6/N] torch.compile rollout to users (#10437) | 2024-11-19 10:09:03 -08:00 |
| `profiler` | [misc] CUDA Time Layerwise Profiler (#8337) | 2024-10-17 10:36:09 -04:00 |
| `prompt_adapter` | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| `spec_decode` | [Bugfix] Qwen-vl output is inconsistent in speculative decoding (#10350) | 2024-11-15 05:40:10 +00:00 |
| `transformers_utils` | [Bugfix] Ensure special tokens are properly filtered out for guided structured output with MistralTokenizer (#10363) | 2024-11-15 14:50:40 +00:00 |
| `triton_utils` | [LoRA][Kernel] Remove the unused libentry module (#10214) | 2024-11-11 09:43:23 +00:00 |
| `usage` | mypy: check additional directories (#9162) | 2024-10-08 22:08:22 +00:00 |
| `v1` | [6/N] torch.compile rollout to users (#10437) | 2024-11-19 10:09:03 -08:00 |
| `vllm_flash_attn` | [ci][build] fix vllm-flash-attn (#8699) | 2024-09-21 23:24:58 -07:00 |
| `worker` | [Platform][Refactor] Extract func get_default_attn_backend to Platform (#10358) | 2024-11-19 11:22:26 +08:00 |
| `__init__.py` | [Core] rename PromptInputs and inputs (#8876) | 2024-09-26 20:35:15 -07:00 |
| `_custom_ops.py` | [Model][Quantization] HQQ support through Marlin kernel expansion (#9766) | 2024-11-19 13:31:12 -08:00 |
| `_ipex_ops.py` | [Misc][XPU] Upgrade to Pytorch 2.5 for xpu backend (#9823) | 2024-11-06 17:29:03 -08:00 |
| `beam_search.py` | [Frontend] re-enable multi-modality input in the new beam search implementation (#9427) | 2024-10-29 11:49:47 +00:00 |
| `block.py` | [mypy] Enable mypy type checking for vllm/core (#7229) | 2024-08-28 07:11:14 +08:00 |
| `config.py` | [6/N] torch.compile rollout to users (#10437) | 2024-11-19 10:09:03 -08:00 |
| `connections.py` | [core][distributed] fix zmq hang (#6759) | 2024-07-24 17:37:12 -07:00 |
| `envs.py` | [6/N] torch.compile rollout to users (#10437) | 2024-11-19 10:09:03 -08:00 |
| `forward_context.py` | [misc] add forward context for attention (#9029) | 2024-10-03 12:09:42 -07:00 |
| `logger.py` | [Misc] small fixes to function tracing file path (#9543) | 2024-11-10 15:21:06 -08:00 |
| `logits_process.py` | [Frontend] Bad words sampling parameter (#9717) | 2024-10-26 16:29:38 +00:00 |
| `outputs.py` | [VLM] Report multi_modal_placeholders in output (#10407) | 2024-11-18 16:06:16 +08:00 |
| `pooling_params.py` | [Frontend] Chat-based Embeddings API (#9759) | 2024-11-01 08:13:35 +00:00 |
| `py.typed` | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| `sampling_params.py` | [Bugfix][Frontend] Reject guided decoding in multistep mode (#9892) | 2024-11-01 01:09:46 +00:00 |
| `scalar_type.py` | [Bugfix] Fix support for dimension like integers and ScalarType (#9299) | 2024-10-17 19:08:34 +00:00 |
| `scripts.py` | [Frontend] Add --version flag to CLI (#10369) | 2024-11-15 13:13:53 -08:00 |
| `sequence.py` | [1/N] Initial prototype for multi-modal processor (#10044) | 2024-11-13 12:39:03 +00:00 |
| `tracing.py` | [misc] hide best_of from engine (#9261) | 2024-10-10 21:30:44 -07:00 |
| `utils.py` | [Misc] Add `__setitem__` for LazyDict (#10469) | 2024-11-20 04:44:57 +00:00 |
| `version.py` | [CI/Build] use setuptools-scm to set `__version__` (#4738) | 2024-09-23 09:44:26 -07:00 |