vllm/vllm: directory contents as of the latest commit (2024-09-20 23:28:56 -07:00)

| Name | Last commit | Date |
| --- | --- | --- |
| adapter_commons/ | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| assets/ | [model] Support for Llava-Next-Video model (#7559) | 2024-09-10 22:21:36 -07:00 |
| attention/ | [Kernel] Build flash-attn from source (#8245) | 2024-09-20 23:27:10 -07:00 |
| compilation/ | [torch.compile] fix functionalization (#8480) | 2024-09-14 09:46:04 -07:00 |
| core/ | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| distributed/ | [MISC] add support custom_op check (#8557) | 2024-09-20 19:03:55 -07:00 |
| engine/ | [Core] Rename PromptInputs and inputs (#8673) | 2024-09-20 19:00:54 -07:00 |
| entrypoints/ | [Core] Rename PromptInputs and inputs (#8673) | 2024-09-20 19:00:54 -07:00 |
| executor/ | [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157) | 2024-09-18 13:56:58 +00:00 |
| inputs/ | [VLM] Use SequenceData.from_token_counts to create dummy data (#8687) | 2024-09-20 23:28:56 -07:00 |
| logging/ | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| lora/ | [Core] Support Lora lineage and base model metadata management (#6315) | 2024-09-20 06:20:56 +00:00 |
| model_executor/ | [VLM] Use SequenceData.from_token_counts to create dummy data (#8687) | 2024-09-20 23:28:56 -07:00 |
| multimodal/ | [Model][VLM] Add Qwen2-VL model support (#7905) | 2024-09-11 09:31:19 -07:00 |
| platforms/ | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| plugins/ | [plugin][torch.compile] allow to add custom compile backend (#8445) | 2024-09-13 09:32:42 -07:00 |
| prompt_adapter/ | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| spec_decode/ | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| transformers_utils/ | [Bugfix][Core] Fix tekken edge case for mistral tokenizer (#8640) | 2024-09-20 14:33:03 -07:00 |
| triton_utils/ | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| usage/ | [CI/Build] Avoid CUDA initialization (#8534) | 2024-09-18 10:38:11 +00:00 |
| worker/ | [bugfix] [AMD] add multi-step advance_step to ROCmFlashAttentionMetadata (#8474) | 2024-09-19 20:49:54 -07:00 |
| __init__.py | [Core] Rename PromptInputs and inputs (#8673) | 2024-09-20 19:00:54 -07:00 |
| _core_ext.py | [Bugfix] Allow ScalarType to be compiled with pytorch 2.3 and add checks for registering FakeScalarType and dynamo support. (#7886) | 2024-08-27 23:13:45 -04:00 |
| _custom_ops.py | [Kernel][Amd] Add fp8 kv cache support for rocm custom paged attention (#8577) | 2024-09-19 17:37:57 +00:00 |
| _ipex_ops.py | [Hardware][intel GPU] bump up ipex version to 2.3 (#8365) | 2024-09-13 16:54:34 -07:00 |
| block.py | [mypy] Enable mypy type checking for vllm/core (#7229) | 2024-08-28 07:11:14 +08:00 |
| config.py | [AMD][ROCm] Quantization methods on ROCm; Fix _scaled_mm call (#8380) | 2024-09-18 10:41:08 -07:00 |
| connections.py | [core][distributed] fix zmq hang (#6759) | 2024-07-24 17:37:12 -07:00 |
| envs.py | [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157) | 2024-09-18 13:56:58 +00:00 |
| logger.py | [Bugfix] Don't disable existing loggers (#7664) | 2024-08-19 15:11:58 -07:00 |
| outputs.py | [Core] Add engine option to return only deltas or final output (#7381) | 2024-09-12 12:02:00 -07:00 |
| pooling_params.py | [Core] Optimize SPMD architecture with delta + serialization optimization (#7109) | 2024-08-18 17:57:20 -07:00 |
| py.typed | Add py.typed so consumers of vLLM can get type checking (#1509) | 2023-10-30 14:50:47 -07:00 |
| sampling_params.py | [Bugfix] Validate SamplingParams n is an int (#8548) | 2024-09-20 12:46:02 -07:00 |
| scalar_type.py | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| scripts.py | [BugFix] Fix clean shutdown issues (#8492) | 2024-09-16 09:33:46 -07:00 |
| sequence.py | [VLM] Use SequenceData.from_token_counts to create dummy data (#8687) | 2024-09-20 23:28:56 -07:00 |
| tracing.py | [CI/Build] Pin OpenTelemetry versions and make errors clearer (#7266) | 2024-08-20 10:02:21 -07:00 |
| utils.py | [MISC] add support custom_op check (#8557) | 2024-09-20 19:03:55 -07:00 |
| version.py | bump version to v0.6.1.post2 (#8473) | 2024-09-13 11:35:00 -07:00 |
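
Several of the entries above (entrypoints/, engine/, sampling_params.py, outputs.py) back the small public API that __init__.py re-exports. A minimal usage sketch, assuming a local quickstart model (the model name is illustrative):

```python
# Sketch of the public API surface backed by the directories listed above:
# entrypoints/ provides the LLM class, engine/ runs the request loop, and
# sampling_params.py / outputs.py define the request and response types.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any HF-compatible model path works

# n must be an int (see the sampling_params.py bugfix #8548 in the table).
params = SamplingParams(n=2, temperature=0.8, top_p=0.95, max_tokens=32)

outputs = llm.generate(["The capital of France is"], params)
for request_output in outputs:              # one RequestOutput per prompt
    for completion in request_output.outputs:  # n completions per prompt
        print(completion.text)
```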
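The multimodal/ and inputs/ entries cover the vision-language path, e.g. the Qwen2-VL support added in #7905. A hedged sketch of the dict-style prompt vLLM accepts for such models; the model id is one of the released Qwen2-VL checkpoints, and the exact prompt template is model-specific and omitted here:

```python
# Hedged sketch of the multimodal input path (multimodal/ and inputs/ above).
# The dict prompt form carries the image alongside the text prompt.
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2-VL-2B-Instruct")  # assumed model id
image = Image.open("example.jpg")             # illustrative local file

outputs = llm.generate(
    {
        "prompt": "Describe the image.",      # real prompts need the
        "multi_modal_data": {"image": image}, # model's chat/vision template
    },
    SamplingParams(max_tokens=32),
)
print(outputs[0].outputs[0].text)
```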
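The logging/ and logger.py entries reflect the pythonic logging rework from #4273, which lets users supply a logging.config.dictConfig-style JSON via an environment variable. A hedged sketch; the VLLM_LOGGING_CONFIG_PATH and VLLM_CONFIGURE_LOGGING names are my reading of that PR, so check vllm/envs.py for the current list:

```python
# Hedged sketch of the custom-logging hook from #4273: vLLM applies a
# dictConfig JSON pointed to by VLLM_LOGGING_CONFIG_PATH at import time
# (and skips logger setup entirely when VLLM_CONFIGURE_LOGGING=0).
import json, os, tempfile

config = {
    "version": 1,
    "disable_existing_loggers": False,  # matches the #7664 bugfix above
    "formatters": {"plain": {"format": "%(levelname)s %(name)s: %(message)s"}},
    "handlers": {"console": {"class": "logging.StreamHandler",
                             "formatter": "plain"}},
    "loggers": {"vllm": {"handlers": ["console"], "level": "DEBUG"}},
}

path = os.path.join(tempfile.gettempdir(), "vllm_logging.json")
with open(path, "w") as f:
    json.dump(config, f)

os.environ["VLLM_LOGGING_CONFIG_PATH"] = path  # set before importing vllm
```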