vllm/tests

Latest commit: 4ef95b0f06 by Thomas Parnell, 2024-07-15 13:14:49 -04:00
[Bugfix] use float32 precision in samplers/test_logprobs.py for comparing with HF (#6409)
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
| Name | Latest commit | Date |
| --- | --- | --- |
| async_engine/ | [ci] try to add multi-node tests (#6280) | 2024-07-12 21:51:48 -07:00 |
| basic_correctness/ | [core][distributed] simplify code to support pipeline parallel (#6406) | 2024-07-14 21:20:51 -07:00 |
| core/ | [Core] Optimize block_manager_v2 vs block_manager_v1 (to make V2 default) (#5602) | 2024-07-01 20:10:37 -07:00 |
| distributed/ | [ci] try to add multi-node tests (#6280) | 2024-07-12 21:51:48 -07:00 |
| engine/ | [Core] Pipeline Parallel Support (#4412) | 2024-07-02 10:58:08 -07:00 |
| entrypoints/ | [BugFix] BatchResponseData body should be optional (#6345) | 2024-07-15 04:06:09 +00:00 |
| fp8_kv/ | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| kernels/ | [ Misc ] Refactor Marlin Python Utilities (#6082) | 2024-07-11 15:40:11 +00:00 |
| lora/ | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| metrics/ | [Misc] Extend vLLM Metrics logging API (#5925) | 2024-06-29 10:36:06 +08:00 |
| model_executor/ | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00 |
| models/ | [Model] Initialize Fuyu-8B support (#3924) | 2024-07-14 05:27:14 +00:00 |
| multimodal/ | [Core] Dynamic image size support for VLMs (#5276) | 2024-07-02 20:34:00 -07:00 |
| prefix_caching/ | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| prompt_adapter/ | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| prompts/ | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| quantization/ | [ Misc ] Refactor Marlin Python Utilities (#6082) | 2024-07-11 15:40:11 +00:00 |
| samplers/ | [Bugfix] use float32 precision in samplers/test_logprobs.py for comparing with HF (#6409) | 2024-07-15 13:14:49 -04:00 |
| spec_decode/ | [Speculative Decoding] Enabling bonus token in speculative decoding for KV cache based models (#5765) | 2024-07-10 16:02:47 -07:00 |
| tensorizer_loader/ | [ci] try to add multi-node tests (#6280) | 2024-07-12 21:51:48 -07:00 |
| tokenization/ | [ BugFix ] Prompt Logprobs Detokenization (#6223) | 2024-07-11 22:02:29 +00:00 |
| tracing/ | [Misc] Add OpenTelemetry support (#4687) | 2024-06-19 01:17:03 +09:00 |
| worker/ | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | [Core] Dynamic image size support for VLMs (#5276) | 2024-07-02 20:34:00 -07:00 |
| test_cache_block_hashing.py | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| test_config.py | [Frontend] Customizable RoPE theta (#5197) | 2024-06-11 10:42:26 -07:00 |
| test_embedded_commit.py | [Misc] Add generated git commit hash as vllm.__commit__ (#6386) | 2024-07-12 22:52:15 +00:00 |
| test_inputs.py | [Core] Consolidate prompt arguments to LLM engines (#4328) | 2024-05-28 13:29:31 -07:00 |
| test_logger.py | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| test_logits_processor.py | [CORE] Quantized lm-head Framework (#4442) | 2024-07-02 22:25:17 +00:00 |
| test_regression.py | Bugfix: fix broken of download models from modelscope (#5233) | 2024-06-06 09:28:10 -07:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| test_sequence.py | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00 |
| test_sharded_state_loader.py | [CI] Upgrade codespell version. (#5381) | 2024-06-12 10:06:14 -07:00 |
| test_utils.py | [CI/Build] Add unit testing for FlexibleArgumentParser (#5798) | 2024-06-25 12:18:03 -07:00 |
| utils.py | [Feature] vLLM CLI (#5090) | 2024-07-14 15:36:43 -07:00 |