| Name | Last commit message | Last commit date |
| --- | --- | --- |
| async_engine | Add metrics to RequestOutput (#2876) | 2024-02-20 21:55:57 -08:00 |
| basic_correctness | [Test] Add basic correctness test (#2908) | 2024-02-18 16:44:50 -08:00 |
| distributed | [Test] Add basic correctness test (#2908) | 2024-02-18 16:44:50 -08:00 |
| engine | Migrate linter from pylint to ruff (#1665) | 2023-11-20 11:58:01 -08:00 |
| entrypoints | multi-LoRA as extra models in OpenAI server (#2775) | 2024-02-17 12:00:48 -08:00 |
| kernels | [Minor] More fix of test_cache.py CI test failure (#2750) | 2024-02-06 11:38:38 -08:00 |
| lora | Add LoRA support for Mixtral (#2831) | 2024-02-14 00:55:45 +01:00 |
| metrics | Fix vllm:prompt_tokens_total metric calculation (#2869) | 2024-02-18 23:55:41 -08:00 |
| models | Support OLMo models. (#2832) | 2024-02-18 21:05:15 -08:00 |
| prefix_caching | [Experimental] Prefix Caching Support (#1669) | 2024-01-17 16:32:10 -08:00 |
| prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| samplers | [FIX] Fix beam search test (#2930) | 2024-02-20 14:37:39 -08:00 |
| worker | Remove hardcoded device="cuda" to support more devices (#2503) | 2024-02-01 15:46:39 -08:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | Fix vllm:prompt_tokens_total metric calculation (#2869) | 2024-02-18 23:55:41 -08:00 |
| test_regression.py | [BugFix] Fix GC bug for LLM class (#2882) | 2024-02-14 22:17:44 -08:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |