vllm/tests
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| async_engine/ | Add metrics to RequestOutput (#2876) | 2024-02-20 21:55:57 -08:00 |
| basic_correctness/ | [Test] Add basic correctness test (#2908) | 2024-02-18 16:44:50 -08:00 |
| distributed/ | [Test] Add basic correctness test (#2908) | 2024-02-18 16:44:50 -08:00 |
| engine/ | Migrate linter from pylint to ruff (#1665) | 2023-11-20 11:58:01 -08:00 |
| entrypoints/ | Add LogProbs for Chat Completions in OpenAI (#2918) | 2024-02-26 10:39:34 +08:00 |
| kernels/ | Optimize GeGLU layer in Gemma (#2975) | 2024-02-21 20:17:52 -08:00 |
| lora/ | chore(vllm): codespell for spell checking (#2820) | 2024-02-21 18:56:01 -08:00 |
| metrics/ | Port metrics from aioprometheus to prometheus_client (#2730) | 2024-02-25 11:54:00 -08:00 |
| models/ | Support OLMo models. (#2832) | 2024-02-18 21:05:15 -08:00 |
| prefix_caching/ | [Experimental] Prefix Caching Support (#1669) | 2024-01-17 16:32:10 -08:00 |
| prompts/ | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| samplers/ | Support per-request seed (#2514) | 2024-02-21 11:47:00 -08:00 |
| worker/ | Remove hardcoded device="cuda" to support more devices (#2503) | 2024-02-01 15:46:39 -08:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | Port metrics from aioprometheus to prometheus_client (#2730) | 2024-02-25 11:54:00 -08:00 |
| test_regression.py | [BugFix] Fix GC bug for LLM class (#2882) | 2024-02-14 22:17:44 -08:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |