| Name | Last commit message | Last commit date |
| --- | --- | --- |
| adapter_commons | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| assets | [Model][VLM] Add LLaVA-Onevision model support (#8486) | 2024-09-22 10:51:44 -07:00 |
| attention | [Kernel] Build flash-attn from source (#8245) | 2024-09-20 23:27:10 -07:00 |
| compilation | [torch.compile] fix functionalization (#8480) | 2024-09-14 09:46:04 -07:00 |
| core | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| distributed | [MISC] add support custom_op check (#8557) | 2024-09-20 19:03:55 -07:00 |
| engine | [Core][Frontend] Support Passing Multimodal Processor Kwargs (#8657) | 2024-09-23 07:44:48 +00:00 |
| entrypoints | [Core][Frontend] Support Passing Multimodal Processor Kwargs (#8657) | 2024-09-23 07:44:48 +00:00 |
| executor | [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157) | 2024-09-18 13:56:58 +00:00 |
| inputs | [Core][Frontend] Support Passing Multimodal Processor Kwargs (#8657) | 2024-09-23 07:44:48 +00:00 |
| logging | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| lora | [Core] Support Lora lineage and base model metadata management (#6315) | 2024-09-20 06:20:56 +00:00 |
| model_executor | [VLM] Fix paligemma, fuyu and persimmon with transformers 4.45 : use config.text_config.vocab_size (#8707) | 2024-09-23 14:43:09 +00:00 |
| multimodal | [Core][Frontend] Support Passing Multimodal Processor Kwargs (#8657) | 2024-09-23 07:44:48 +00:00 |
| platforms | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| plugins | [plugin][torch.compile] allow to add custom compile backend (#8445) | 2024-09-13 09:32:42 -07:00 |
| prompt_adapter | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| spec_decode | [SpecDec][Misc] Cleanup, remove bonus token logic. (#8701) | 2024-09-22 12:34:14 -07:00 |
| transformers_utils | [Core][Frontend] Support Passing Multimodal Processor Kwargs (#8657) | 2024-09-23 07:44:48 +00:00 |
| triton_utils | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| usage | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| vllm_flash_attn | [ci][build] fix vllm-flash-attn (#8699) | 2024-09-21 23:24:58 -07:00 |
| worker | [Bugfix][CPU] fix missing input intermediate_tensors in the cpu_model_runner (#8733) | 2024-09-23 13:15:16 +00:00 |
| __init__.py | [Core] Rename PromptInputs and inputs (#8673) | 2024-09-20 19:00:54 -07:00 |
| _core_ext.py | [Bugfix] Allow ScalarType to be compiled with pytorch 2.3 and add checks for registering FakeScalarType and dynamo support. (#7886) | 2024-08-27 23:13:45 -04:00 |
| _custom_ops.py | [Kernel][Amd] Add fp8 kv cache support for rocm custom paged attention (#8577) | 2024-09-19 17:37:57 +00:00 |
| _ipex_ops.py | [Hardware][intel GPU] bump up ipex version to 2.3 (#8365) | 2024-09-13 16:54:34 -07:00 |
| block.py | [mypy] Enable mypy type checking for vllm/core (#7229) | 2024-08-28 07:11:14 +08:00 |
| config.py | [Model] Support pp for qwen2-vl (#8696) | 2024-09-23 13:46:59 +00:00 |
| connections.py | [core][distributed] fix zmq hang (#6759) | 2024-07-24 17:37:12 -07:00 |
| envs.py | [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157) | 2024-09-18 13:56:58 +00:00 |
| logger.py | [Bugfix] Don't disable existing loggers (#7664) | 2024-08-19 15:11:58 -07:00 |
| outputs.py | [Core] Add engine option to return only deltas or final output (#7381) | 2024-09-12 12:02:00 -07:00 |
| pooling_params.py | [Core] Optimize SPMD architecture with delta + serialization optimization (#7109) | 2024-08-18 17:57:20 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [Bugfix] Validate SamplingParam n is an int (#8548) | 2024-09-20 12:46:02 -07:00 |
| scalar_type.py | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| scripts.py | [BugFix] Fix clean shutdown issues (#8492) | 2024-09-16 09:33:46 -07:00 |
| sequence.py | [VLM] Use SequenceData.from_token_counts to create dummy data (#8687) | 2024-09-20 23:28:56 -07:00 |
| tracing.py | [CI/Build] Pin OpenTelemetry versions and make errors clearer (#7266) | 2024-08-20 10:02:21 -07:00 |
| utils.py | [Core][Frontend] Support Passing Multimodal Processor Kwargs (#8657) | 2024-09-23 07:44:48 +00:00 |
| version.py | bump version to v0.6.1.post2 (#8473) | 2024-09-13 11:35:00 -07:00 |