# vllm/tests

Latest commit: `ce4f5a29fb` by Sage Moore — Add Automatic Prefix Caching (#2762)
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
Committed: 2024-03-02 00:50:01 -08:00
| Name | Latest commit message | Date |
| --- | --- | --- |
| async_engine/ | Add metrics to RequestOutput (#2876) | 2024-02-20 21:55:57 -08:00 |
| basic_correctness/ | [Test] Add basic correctness test (#2908) | 2024-02-18 16:44:50 -08:00 |
| distributed/ | [Test] Add basic correctness test (#2908) | 2024-02-18 16:44:50 -08:00 |
| engine/ | Migrate linter from pylint to ruff (#1665) | 2023-11-20 11:58:01 -08:00 |
| entrypoints/ | Add guided decoding for OpenAI API server (#2819) | 2024-02-29 22:13:08 +00:00 |
| kernels/ | Enable GQA support in the prefix prefill kernels (#3007) | 2024-02-27 01:14:31 -08:00 |
| lora/ | Add LoRA support for Gemma (#3050) | 2024-02-28 13:03:28 -08:00 |
| metrics/ | Port metrics from aioprometheus to prometheus_client (#2730) | 2024-02-25 11:54:00 -08:00 |
| models/ | Integrate Marlin Kernels for Int4 GPTQ inference (#2497) | 2024-03-01 12:47:51 -08:00 |
| prefix_caching/ | Add Automatic Prefix Caching (#2762) | 2024-03-02 00:50:01 -08:00 |
| prompts/ | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| samplers/ | Support per-request seed (#2514) | 2024-02-21 11:47:00 -08:00 |
| worker/ | Remove hardcoded device="cuda" to support more devices (#2503) | 2024-02-01 15:46:39 -08:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | Integrate Marlin Kernels for Int4 GPTQ inference (#2497) | 2024-03-01 12:47:51 -08:00 |
| test_cache_block_hashing.py | Add Automatic Prefix Caching (#2762) | 2024-03-02 00:50:01 -08:00 |
| test_regression.py | [BugFix] Fix GC bug for LLM class (#2882) | 2024-02-14 22:17:44 -08:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |