vllm/tests
| Name | Last commit | Date |
| --- | --- | --- |
| async_engine | Fix/async chat serving (#2727) | 2024-05-03 11:04:14 -07:00 |
| basic_correctness | [Kernel] Use flashinfer for decoding (#4353) | 2024-05-03 15:51:27 -07:00 |
| core | [Core] Ignore infeasible swap requests. (#4557) | 2024-05-02 14:31:20 -07:00 |
| distributed | [Kernel] Use flashinfer for decoding (#4353) | 2024-05-03 15:51:27 -07:00 |
| engine | [Core] Add multiproc_worker_utils for multiprocessing-based workers (#4357) | 2024-05-01 18:41:59 +00:00 |
| entrypoints | Fix/async chat serving (#2727) | 2024-05-03 11:04:14 -07:00 |
| fp8_kv | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| kernels | [Kernel] Use flashinfer for decoding (#4353) | 2024-05-03 15:51:27 -07:00 |
| lora | [Kernel] Full Tensor Parallelism for LoRA Layers (#3524) | 2024-04-27 00:03:48 -07:00 |
| metrics | [CI] Add regression tests to ensure the async engine generates metrics (#4524) | 2024-05-01 19:57:12 -07:00 |
| model_executor | [Core] Support offline use of local cache for models (#4374) | 2024-04-27 09:59:55 -07:00 |
| models | [Kernel] Support running GPTQ 8-bit models in Marlin (#4533) | 2024-05-02 12:56:22 -04:00 |
| prefix_caching | [Core][Bugfix] Refactor block manager for better testability (#3492) | 2024-03-27 23:59:28 -07:00 |
| prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| quantization | [Kernel] Marlin Expansion: Support AutoGPTQ Models with Marlin (#3922) | 2024-04-29 09:35:34 -07:00 |
| samplers | [Core][Model runner refactoring 1/N] Refactor attn metadata term (#4518) | 2024-05-03 10:20:12 -07:00 |
| spec_decode | [Speculative decoding] Support target-model logprobs (#4378) | 2024-05-03 15:52:01 -07:00 |
| tensorizer_loader | [Core][Distributed] use cpu group to broadcast metadata in cpu (#4444) | 2024-04-29 13:52:22 -07:00 |
| tokenization | [Bugfix] Fix parameter name in get_tokenizer (#4107) | 2024-04-25 19:10:48 -07:00 |
| worker | [Core][Model runner refactoring 1/N] Refactor attn metadata term (#4518) | 2024-05-03 10:20:12 -07:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | [Bug fix][Core] assert num_new_tokens == 1 fails when SamplingParams.n is not 1 and max_tokens is large & Add tests for preemption (#4451) | 2024-05-01 19:24:13 -07:00 |
| test_cache_block_hashing.py | [CI] Try introducing isort. (#3495) | 2024-03-25 07:59:47 -07:00 |
| test_config.py | [Core] Refactor model loading code (#4097) | 2024-04-16 11:34:39 -07:00 |
| test_logger.py | [MISC] Rework logger to enable pythonic custom logging configuration to be provided (#4273) | 2024-05-01 17:34:40 -07:00 |
| test_logits_processor.py | [Core][Model runner refactoring 1/N] Refactor attn metadata term (#4518) | 2024-05-03 10:20:12 -07:00 |
| test_regression.py | [BugFix] Fix GC bug for LLM class (#2882) | 2024-02-14 22:17:44 -08:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| test_sequence.py | [Chunked Prefill][4/n] Chunked prefill scheduler. (#3853) | 2024-04-05 10:17:58 -07:00 |