| Name | Last commit | Last updated |
|---|---|---|
| async_engine | [MISC] Consolidate cleanup() and refactor offline_inference_with_prefix.py (#9510) | 2024-10-18 14:30:55 -07:00 |
| basic_correctness | [Bugfix] Fix pickle of input when async output processing is on (#9931) | 2024-11-06 00:39:26 +00:00 |
| compile | [v1][torch.compile] support managing cudagraph buffer (#10203) | 2024-11-11 11:10:27 -08:00 |
| core | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| data | [Bugfix] Fix load config when using bools (#9533) | 2024-10-27 13:46:41 -04:00 |
| distributed | [misc][distributed] auto port selection and disable tests (#10226) | 2024-11-11 11:54:59 -08:00 |
| encoder_decoder | [Encoder Decoder] Add flash_attn kernel support for encoder-decoder models (#9559) | 2024-11-01 23:22:49 -07:00 |
| engine | Adds method to read the pooling types from model's files (#9506) | 2024-11-07 08:42:40 +00:00 |
| entrypoints | [V1] AsyncLLM Implementation (#9826) | 2024-11-11 23:05:38 +00:00 |
| fp8_kv | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| kernels | [Kernel][Triton] Add Triton implementation for scaled_mm_triton to support fp8 and int8 SmoothQuant, symmetric case (#9857) | 2024-11-08 19:59:22 -05:00 |
| lora | [LoRA][Kernel] Remove the unused libentry module (#10214) | 2024-11-11 09:43:23 +00:00 |
| metrics | [Frontend] Add max_tokens prometheus metric (#9881) | 2024-11-04 22:53:24 +00:00 |
| model_executor | Adds method to read the pooling types from model's files (#9506) | 2024-11-07 08:42:40 +00:00 |
| models | [Hardware][CPU] Add embedding models support for CPU backend (#10193) | 2024-11-11 08:54:28 +00:00 |
| mq_llm_engine | [Bugfix][core] replace heartbeat with pid check (#9818) | 2024-10-30 09:34:07 -07:00 |
| multi_step | [Core] Deprecating block manager v1 and make block manager v2 default (#8704) | 2024-10-17 11:38:15 -05:00 |
| multimodal | [0/N] Rename MultiModalInputs to MultiModalKwargs (#10040) | 2024-11-09 11:31:02 +08:00 |
| plugins/vllm_add_dummy_model | [Model] VLM2Vec, the first multimodal embedding model in vLLM (#9303) | 2024-10-16 14:31:00 +08:00 |
| prefix_caching | [Bugfix] Fix illegal memory access error with chunked prefill, prefix caching, block manager v2 and xformers enabled together (#9532) | 2024-10-31 11:46:36 -07:00 |
| prompt_adapter | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| quantization | 🐛 fix torch memory profiling (#9516) | 2024-10-18 21:25:19 -04:00 |
| samplers | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| spec_decode | Disable spec-decode + chunked-prefill for draft models with tensor parallelism > 1 (#10136) | 2024-11-08 15:56:18 +00:00 |
| tensorizer_loader | [MISC] Consolidate cleanup() and refactor offline_inference_with_prefix.py (#9510) | 2024-10-18 14:30:55 -07:00 |
| tokenization | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| tool_use | [Frontend] Tool calling parser for Granite 3.0 models (#9027) | 2024-11-07 07:09:02 -08:00 |
| tpu | [torch.compile] integration with compilation control (#9058) | 2024-10-10 12:39:36 -07:00 |
| tracing | [BugFix] Prevent exporting duplicate OpenTelemetry spans (#9017) | 2024-10-22 11:11:53 -07:00 |
| v1 | [V1] AsyncLLM Implementation (#9826) | 2024-11-11 23:05:38 +00:00 |
| weight_loading | [CI/Build] Add shell script linting using shellcheck (#7925) | 2024-11-07 18:17:29 +00:00 |
| worker | [2/N] executor pass the complete config to worker/modelrunner (#9938) | 2024-11-02 07:35:05 -07:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | [V1] Make v1 more testable (#9888) | 2024-11-06 11:57:35 -08:00 |
| test_cache_block_hashing.py | [Core] Make encoder-decoder inputs a nested structure to be more composable (#9604) | 2024-11-05 10:07:31 +08:00 |
| test_config.py | [Frontend][Core] Override HF config.json via CLI (#5836) | 2024-11-09 16:19:27 +00:00 |
| test_embedded_commit.py | [CI/Build] use setuptools-scm to set __version__ (#4738) | 2024-09-23 09:44:26 -07:00 |
| test_inputs.py | [Core][Frontend] Add Support for Inference Time mm_processor_kwargs (#9131) | 2024-10-08 14:12:56 +00:00 |
| test_logger.py | Rename vllm.logging to vllm.logging_utils (#10134) | 2024-11-08 20:53:24 +00:00 |
| test_logits_processor.py | [Core] Factor out common code in SequenceData and Sequence (#8675) | 2024-09-21 02:30:39 +00:00 |
| test_regression.py | Bugfix: fix broken of download models from modelscope (#5233) | 2024-06-06 09:28:10 -07:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| test_scalartype.py | [Bugfix] Fix support for dimension like integers and ScalarType (#9299) | 2024-10-17 19:08:34 +00:00 |
| test_sequence.py | [Core] Factor out common code in SequenceData and Sequence (#8675) | 2024-09-21 02:30:39 +00:00 |
| test_sharded_state_loader.py | [CI/Build] Replaced some models on tests for smaller ones (#9570) | 2024-10-22 04:52:14 +00:00 |
| test_utils.py | [Bugfix] Fix load config when using bools (#9533) | 2024-10-27 13:46:41 -04:00 |
| utils.py | Adds method to read the pooling types from model's files (#9506) | 2024-11-07 08:42:40 +00:00 |