| Name | Last commit | Last updated |
| --- | --- | --- |
| async_engine | [Frontend] Clean up type annotations for mistral tokenizer (#8314) | 2024-09-10 16:49:11 +00:00 |
| basic_correctness | [MISC] Consolidate FP8 kv-cache tests (#8131) | 2024-09-04 18:53:25 +00:00 |
| compile | [tpu][misc] fix typo (#8260) | 2024-09-06 22:40:46 -07:00 |
| core | [Performance] Enable chunked prefill and prefix caching together (#7753) | 2024-08-28 00:36:31 -07:00 |
| data | [Frontend]-config-cli-args (#7737) | 2024-08-30 08:21:02 -07:00 |
| distributed | [Bugfix] Fix InternVL2 vision embeddings process with pipeline parallel (#8299) | 2024-09-11 10:11:01 +08:00 |
| engine | [CI/Build] Increasing timeout for multiproc worker tests (#8203) | 2024-09-06 11:51:03 -07:00 |
| entrypoints | [Core] Support load and unload LoRA in api server (#6566) | 2024-09-05 18:10:33 -07:00 |
| fp8_kv | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| kernels | [Misc] Fused MoE Marlin support for GPTQ (#8217) | 2024-09-09 23:02:52 -04:00 |
| lora | [CI] Change test input in Gemma LoRA test (#8163) | 2024-09-04 13:05:50 -07:00 |
| metrics | [Bugfix] StatLoggers: cache spec decode metrics when they get collected. (#6645) | 2024-07-23 23:05:05 +00:00 |
| model_executor | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00 |
| models | [model] Support for Llava-Next-Video model (#7559) | 2024-09-10 22:21:36 -07:00 |
| multi_step | [Frontend] Add --logprobs argument to benchmark_serving.py (#8191) | 2024-09-06 09:01:14 -07:00 |
| multimodal | [VLM][Core] Fix exceptions on ragged NestedTensors (#7974) | 2024-08-29 03:24:31 +00:00 |
| plugins/vllm_add_dummy_model | [misc][plugin] add plugin system implementation (#7426) | 2024-08-13 16:24:17 -07:00 |
| prefix_caching | [MISC] Add prefix cache hit rate to metrics (#7606) | 2024-08-19 11:52:07 -07:00 |
| prompt_adapter | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| quantization | support bitsandbytes 8-bit and FP4 quantized models (#7445) | 2024-08-29 19:09:08 -04:00 |
| samplers | [SpecDecode][Kernel] Flashinfer Rejection Sampling (#7244) | 2024-09-01 21:23:29 -07:00 |
| spec_decode | [SpecDecode][Kernel] Flashinfer Rejection Sampling (#7244) | 2024-09-01 21:23:29 -07:00 |
| tensorizer_loader | [mypy] Misc. typing improvements (#7417) | 2024-08-13 09:20:20 +08:00 |
| tokenization | [Core] Allow specifying custom Executor (#6557) | 2024-07-20 01:25:06 +00:00 |
| tool_use | [Bugfix] Streamed tool calls now more strictly follow OpenAI's format; ensures Vercel AI SDK compatibility (#8272) | 2024-09-09 10:45:11 -04:00 |
| tpu | [torch.compile] remove reset (#7975) | 2024-08-28 17:32:26 -07:00 |
| tracing | [Core] Fix tracking of model forward time in case of PP>1 (#7440) | 2024-08-16 13:46:01 -07:00 |
| weight_loading | [Misc] Fused MoE Marlin support for GPTQ (#8217) | 2024-09-09 23:02:52 -04:00 |
| worker | [Core] Add AttentionState abstraction (#7663) | 2024-08-20 18:50:45 +00:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | [model] Support for Llava-Next-Video model (#7559) | 2024-09-10 22:21:36 -07:00 |
| test_cache_block_hashing.py | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| test_config.py | [Bugfix] Bump transformers to 4.43.2 (#6752) | 2024-07-24 13:22:16 -07:00 |
| test_embedded_commit.py | [Misc] Add generated git commit hash as vllm.__commit__ (#6386) | 2024-07-12 22:52:15 +00:00 |
| test_inputs.py | [Core] Support serving encoder/decoder models (#7258) | 2024-08-09 10:39:41 +08:00 |
| test_logger.py | [CI/Build] Use python 3.12 in cuda image (#8133) | 2024-09-07 13:03:16 -07:00 |
| test_logits_processor.py | [Core] Optimize SPMD architecture with delta + serialization optimization (#7109) | 2024-08-18 17:57:20 -07:00 |
| test_regression.py | Bugfix: fix broken of download models from modelscope (#5233) | 2024-06-06 09:28:10 -07:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| test_scalartype.py | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| test_sequence.py | [Core] Logprobs support in Multi-step (#7652) | 2024-08-29 19:19:08 -07:00 |
| test_sharded_state_loader.py | [CI] Upgrade codespell version. (#5381) | 2024-06-12 10:06:14 -07:00 |
| test_utils.py | [Frontend]-config-cli-args (#7737) | 2024-08-30 08:21:02 -07:00 |
| utils.py | [Bugfix] Fix broken OpenAI tensorizer test (#8258) | 2024-09-07 08:02:39 +00:00 |