vllm/tests (last commit: 2024-07-19 23:08:15 +00:00)
Name | Last commit | Last updated
async_engine/ | [BugFix][Frontend] Use LoRA tokenizer in OpenAI APIs (#6227) | 2024-07-18 15:13:30 +08:00
basic_correctness/ | [ci][test] add correctness test for cpu offloading (#6549) | 2024-07-18 23:41:06 +00:00
core/ | [Misc] Small perf improvements (#6520) | 2024-07-19 12:10:56 -07:00
distributed/ | [Core] Multiprocessing Pipeline Parallel support (#6130) | 2024-07-18 19:15:52 -07:00
engine/ | [Core] Pipeline Parallel Support (#4412) | 2024-07-02 10:58:08 -07:00
entrypoints/ | [Bugfix][Frontend] Fix missing /metrics endpoint (#6463) | 2024-07-19 03:55:13 +00:00
fp8_kv/ | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00
kernels/ | [ Kernel ] Enable Dynamic Per Token fp8 (#6547) | 2024-07-19 23:08:15 +00:00
lora/ | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00
metrics/ | [Bugfix] Fix Ray Metrics API usage (#6354) | 2024-07-17 19:40:10 +00:00
model_executor/ | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00
models/ | [CI/Build] Remove "boardwalk" image asset (#6460) | 2024-07-16 08:59:36 -07:00
multimodal/ | [Core] Dynamic image size support for VLMs (#5276) | 2024-07-02 20:34:00 -07:00
prefix_caching/ | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00
prompt_adapter/ | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00
prompts/ | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00
quantization/ | [Kernel][Attention] Separate Attention.kv_scale into k_scale and v_scale (#6081) | 2024-07-16 15:31:32 -07:00
samplers/ | [Bugfix] Make spec. decode respect per-request seed. (#6034) | 2024-07-18 19:22:08 -07:00
spec_decode/ | [Bugfix] [SpecDecode] AsyncMetricsCollector: update time since last collection (#6578) | 2024-07-19 14:01:03 -07:00
tensorizer_loader/ | [Doc][CI/Build] Update docs and tests to use vllm serve (#6431) | 2024-07-17 07:43:21 +00:00
tokenization/ | [ BugFix ] Prompt Logprobs Detokenization (#6223) | 2024-07-11 22:02:29 +00:00
tracing/ | [Misc] Add OpenTelemetry support (#4687) | 2024-06-19 01:17:03 +09:00
worker/ | [Core] Refactor _prepare_model_input_tensors - take 2 (#6164) | 2024-07-17 09:37:16 -07:00
__init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00
conftest.py | [CI/Build] Remove "boardwalk" image asset (#6460) | 2024-07-16 08:59:36 -07:00
test_cache_block_hashing.py | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00
test_config.py | [Frontend] Customizable RoPE theta (#5197) | 2024-06-11 10:42:26 -07:00
test_embedded_commit.py | [Misc] Add generated git commit hash as vllm.__commit__ (#6386) | 2024-07-12 22:52:15 +00:00
test_inputs.py | [Core] Consolidate prompt arguments to LLM engines (#4328) | 2024-05-28 13:29:31 -07:00
test_logger.py | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00
test_logits_processor.py | [CORE] Quantized lm-head Framework (#4442) | 2024-07-02 22:25:17 +00:00
test_regression.py | Bugfix: fix broken of download models from modelscope (#5233) | 2024-06-06 09:28:10 -07:00
test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00
test_sequence.py | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00
test_sharded_state_loader.py | [CI] Upgrade codespell version. (#5381) | 2024-06-12 10:06:14 -07:00
test_utils.py | [CI/Build] Add unit testing for FlexibleArgumentParser (#5798) | 2024-06-25 12:18:03 -07:00
utils.py | [ci][test] add correctness test for cpu offloading (#6549) | 2024-07-18 23:41:06 +00:00