vllm/tests

Latest commit a494140433: [Frontend] Support complex message content for chat completions endpoint (#3467)
Author: Florian Greinacher
Co-authored-by: Lily Liu <lilyliupku@gmail.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Date: 2024-04-30 16:28:46 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| async_engine | [Bugfix] Abort requests when the connection to /v1/completions is interrupted (#4363) | 2024-04-27 09:48:37 -07:00 |
| basic_correctness | [Test] Test multiple attn backend for chunked prefill. (#4023) | 2024-04-12 09:56:57 -07:00 |
| core | [Core] Scheduling optimization 2 (#4280) | 2024-04-23 08:02:11 +00:00 |
| distributed | [Core][Distributed] use cpu/gloo to initialize pynccl (#4248) | 2024-04-23 18:32:19 -07:00 |
| engine | Make initialization of tokenizer and detokenizer optional (#3748) | 2024-04-21 22:06:46 +00:00 |
| entrypoints | [Frontend] Support complex message content for chat completions endpoint (#3467) | 2024-04-30 16:28:46 -07:00 |
| fp8_kv | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| kernels | [Bugfix][Kernel] allow non-power-of-two head sizes in prefix prefill (#4128) | 2024-04-18 00:51:28 -07:00 |
| lora | [Kernel] Full Tensor Parallelism for LoRA Layers (#3524) | 2024-04-27 00:03:48 -07:00 |
| metrics | Re-enable the 80 char line width limit (#3305) | 2024-03-10 19:49:14 -07:00 |
| model_executor | [Core] Support offline use of local cache for models (#4374) | 2024-04-27 09:59:55 -07:00 |
| models | [Kernel] Support Fp8 Checkpoints (Dynamic + Static) (#4332) | 2024-04-30 21:46:12 +00:00 |
| prefix_caching | [Core][Bugfix]Refactor block manager for better testability (#3492) | 2024-03-27 23:59:28 -07:00 |
| prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| quantization | [Kernel] Marlin Expansion: Support AutoGPTQ Models with Marlin (#3922) | 2024-04-29 09:35:34 -07:00 |
| samplers | [BugFix] Fix min_tokens when eos_token_id is None (#4389) | 2024-04-27 09:52:46 -07:00 |
| spec_decode | [BugFix] fix num_lookahead_slots missing in async executor (#4165) | 2024-04-30 10:12:59 -07:00 |
| tensorizer_loader | [Core][Distributed] use cpu group to broadcast metadata in cpu (#4444) | 2024-04-29 13:52:22 -07:00 |
| tokenization | [Bugfix] Fix parameter name in get_tokenizer (#4107) | 2024-04-25 19:10:48 -07:00 |
| worker | [Core][Distributed] use cpu group to broadcast metadata in cpu (#4444) | 2024-04-29 13:52:22 -07:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | [BugFix] Fix handling of stop strings and stop token ids (#3672) | 2024-04-11 15:34:12 -07:00 |
| test_cache_block_hashing.py | [CI] Try introducing isort. (#3495) | 2024-03-25 07:59:47 -07:00 |
| test_config.py | [Core] Refactor model loading code (#4097) | 2024-04-16 11:34:39 -07:00 |
| test_logger.py | [Core] add an option to log every function call to for debugging hang/crash in distributed inference (#4079) | 2024-04-18 16:15:12 -07:00 |
| test_logits_processor.py | [Core] Refactoring sampler and support prompt logprob for chunked prefill (#4309) | 2024-04-26 13:02:02 +00:00 |
| test_regression.py | [BugFix] Fix GC bug for LLM class (#2882) | 2024-02-14 22:17:44 -08:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| test_sequence.py | [Chunked Prefill][4/n] Chunked prefill scheduler. (#3853) | 2024-04-05 10:17:58 -07:00 |