vllm/tests

Latest commit 96b6f475dd: Remove hardcoded device="cuda" to support more devices (#2503)
Author: Kunshang Ji
Co-authored-by: Jiang Li <jiang1.li@intel.com>
Co-authored-by: Kunshang Ji <kunshang.ji@intel.com>
Date: 2024-02-01 15:46:39 -08:00
Name                     | Last commit                                                                | Date
async_engine             | [Experimental] Add multi-LoRA support (#1804)                              | 2024-01-23 15:26:37 -08:00
distributed              | Implement custom all reduce kernels (#2192)                                | 2024-01-27 12:46:35 -08:00
engine                   | Migrate linter from pylint to ruff (#1665)                                 | 2023-11-20 11:58:01 -08:00
entrypoints              | Support Batch Completion in Server (#2529)                                 | 2024-01-24 17:11:07 -08:00
kernels                  | Remove hardcoded device="cuda" to support more devices (#2503)             | 2024-02-01 15:46:39 -08:00
lora                     | Remove hardcoded device="cuda" to support more devices (#2503)             | 2024-02-01 15:46:39 -08:00
models                   | Add StableLM3B model (#2372)                                               | 2024-01-16 20:32:40 -08:00
prefix_caching           | [Experimental] Prefix Caching Support (#1669)                              | 2024-01-17 16:32:10 -08:00
prompts                  | [BugFix] Fix input positions for long context with sliding window (#2088)  | 2023-12-13 12:28:13 -08:00
samplers                 | Remove hardcoded device="cuda" to support more devices (#2503)             | 2024-02-01 15:46:39 -08:00
worker                   | Remove hardcoded device="cuda" to support more devices (#2503)             | 2024-02-01 15:46:39 -08:00
__init__.py              | [Small] Formatter only checks lints in changed files (#1528)               | 2023-10-31 15:39:38 -07:00
conftest.py              | [BUGFIX] Fix the path of test prompts (#2273)                              | 2023-12-26 10:37:21 -08:00
test_regression.py       | [Bugfix] fix crash if max_tokens=None (#2570)                              | 2024-01-23 22:38:55 -08:00
test_sampling_params.py  | [Bugfix] fix crash if max_tokens=None (#2570)                              | 2024-01-23 22:38:55 -08:00