vllm/tests
Latest commit: 22de45235c — Push logprob generation to LLMEngine (#3065)
Author: Antoni Baum (Co-authored-by: Avnish Narayan <avnish@anyscale.com>)
Date: 2024-03-04 19:54:06 +00:00
async_engine                  Add metrics to RequestOutput (#2876)                                        2024-02-20 21:55:57 -08:00
basic_correctness             [Test] Add basic correctness test (#2908)                                   2024-02-18 16:44:50 -08:00
distributed                   [Test] Add basic correctness test (#2908)                                   2024-02-18 16:44:50 -08:00
engine                        Migrate linter from pylint to ruff (#1665)                                  2023-11-20 11:58:01 -08:00
entrypoints                   Push logprob generation to LLMEngine (#3065)                                2024-03-04 19:54:06 +00:00
kernels                       Enable GQA support in the prefix prefill kernels (#3007)                    2024-02-27 01:14:31 -08:00
lora                          Add LoRA support for Gemma (#3050)                                          2024-02-28 13:03:28 -08:00
metrics                       Port metrics from aioprometheus to prometheus_client (#2730)                2024-02-25 11:54:00 -08:00
models                        Integrate Marlin Kernels for Int4 GPTQ inference (#2497)                    2024-03-01 12:47:51 -08:00
prefix_caching                Add Automatic Prefix Caching (#2762)                                        2024-03-02 00:50:01 -08:00
prompts                       [BugFix] Fix input positions for long context with sliding window (#2088)   2023-12-13 12:28:13 -08:00
samplers                      Push logprob generation to LLMEngine (#3065)                                2024-03-04 19:54:06 +00:00
worker                        Push logprob generation to LLMEngine (#3065)                                2024-03-04 19:54:06 +00:00
__init__.py                   [Small] Formatter only checks lints in changed files (#1528)                2023-10-31 15:39:38 -07:00
conftest.py                   Integrate Marlin Kernels for Int4 GPTQ inference (#2497)                    2024-03-01 12:47:51 -08:00
test_cache_block_hashing.py   Add Automatic Prefix Caching (#2762)                                        2024-03-02 00:50:01 -08:00
test_regression.py            [BugFix] Fix GC bug for LLM class (#2882)                                   2024-02-14 22:17:44 -08:00
test_sampling_params.py       [Bugfix] fix crash if max_tokens=None (#2570)                               2024-01-23 22:38:55 -08:00