vllm/tests (latest commit 2024-08-28 22:18:13 -07:00)
async_engine [Tests] Disable retries and use context manager for openai client (#7565) 2024-08-26 21:33:17 -07:00
basic_correctness [Performance] Enable chunked prefill and prefix caching together (#7753) 2024-08-28 00:36:31 -07:00
compile [torch.compile] avoid Dynamo guard evaluation overhead (#7898) 2024-08-28 16:10:12 -07:00
core [Performance] Enable chunked prefill and prefix caching together (#7753) 2024-08-28 00:36:31 -07:00
distributed [Ray backend] Better error when pg topology is bad. (#7584) 2024-08-22 17:44:25 -07:00
engine [Core] Asynchronous Output Processor (#7049) 2024-08-26 20:53:20 -07:00
entrypoints [Tests] Disable retries and use context manager for openai client (#7565) 2024-08-26 21:33:17 -07:00
fp8_kv Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) 2024-04-03 14:15:55 -07:00
kernels Revert "[Core][Kernels] Use FlashInfer backend for FP8 KV Cache when available." (#7982) 2024-08-28 21:27:06 -07:00
lora [Core] Add multi-step support to LLMEngine (#7789) 2024-08-23 12:45:53 -07:00
metrics [Bugfix] StatLoggers: cache spec decode metrics when they get collected. (#6645) 2024-07-23 23:05:05 +00:00
model_executor [CI/Build] Move test_utils.py to tests/utils.py (#4425) 2024-05-13 23:50:09 +09:00
models [Model] Add multi-image input support for LLaVA-Next offline inference (#7230) 2024-08-28 07:09:02 +08:00
multi_step [Tests] Disable retries and use context manager for openai client (#7565) 2024-08-26 21:33:17 -07:00
multimodal [VLM][Core] Fix exceptions on ragged NestedTensors (#7974) 2024-08-29 03:24:31 +00:00
plugins/vllm_add_dummy_model [misc][plugin] add plugin system implementation (#7426) 2024-08-13 16:24:17 -07:00
prefix_caching [MISC] Add prefix cache hit rate to metrics (#7606) 2024-08-19 11:52:07 -07:00
prompt_adapter [CORE] Adding support for insertion of soft-tuned prompts (#4645) 2024-07-09 13:26:36 -07:00
prompts [BugFix] Fix input positions for long context with sliding window (#2088) 2023-12-13 12:28:13 -08:00
quantization [Kernel] Expand MoE weight loading + Add Fused Marlin MoE Kernel (#7766) 2024-08-27 15:07:09 -07:00
samplers [mypy][CI/Build] Fix mypy errors (#7929) 2024-08-27 23:47:44 -07:00
spec_decode [Bugfix] Unify rank computation across regular decoding and speculative decoding (#7899) 2024-08-28 22:18:13 -07:00
tensorizer_loader [mypy] Misc. typing improvements (#7417) 2024-08-13 09:20:20 +08:00
tokenization [Core] Allow specifying custom Executor (#6557) 2024-07-20 01:25:06 +00:00
tpu [torch.compile] remove reset (#7975) 2024-08-28 17:32:26 -07:00
tracing [Core] Fix tracking of model forward time in case of PP>1 (#7440) 2024-08-16 13:46:01 -07:00
weight_loading [Kernel] Expand MoE weight loading + Add Fused Marlin MoE Kernel (#7766) 2024-08-27 15:07:09 -07:00
worker [Core] Add AttentionState abstraction (#7663) 2024-08-20 18:50:45 +00:00
__init__.py [Small] Formatter only checks lints in changed files (#1528) 2023-10-31 15:39:38 -07:00
conftest.py [Model] Add multi-image input support for LLaVA-Next offline inference (#7230) 2024-08-28 07:09:02 +08:00
test_cache_block_hashing.py [mypy] Enable type checking for test directory (#5017) 2024-06-15 04:45:31 +00:00
test_config.py [Bugfix] Bump transformers to 4.43.2 (#6752) 2024-07-24 13:22:16 -07:00
test_embedded_commit.py [Misc] Add generated git commit hash as vllm.__commit__ (#6386) 2024-07-12 22:52:15 +00:00
test_inputs.py [Core] Support serving encoder/decoder models (#7258) 2024-08-09 10:39:41 +08:00
test_logger.py [ci][test] fix engine/logger test (#7621) 2024-08-16 23:00:59 -07:00
test_logits_processor.py [Core] Optimize SPMD architecture with delta + serialization optimization (#7109) 2024-08-18 17:57:20 -07:00
test_regression.py Bugfix: fix broken of download models from modelscope (#5233) 2024-06-06 09:28:10 -07:00
test_sampling_params.py [Bugfix] fix crash if max_tokens=None (#2570) 2024-01-23 22:38:55 -08:00
test_scalartype.py [Misc] Disambiguate quantized types via a new ScalarType (#6396) 2024-08-02 13:51:58 -07:00
test_sequence.py [Core] Optimize SPMD architecture with delta + serialization optimization (#7109) 2024-08-18 17:57:20 -07:00
test_sharded_state_loader.py [CI] Upgrade codespell version. (#5381) 2024-06-12 10:06:14 -07:00
test_utils.py [mypy] Misc. typing improvements (#7417) 2024-08-13 09:20:20 +08:00
utils.py [Tests] Disable retries and use context manager for openai client (#7565) 2024-08-26 21:33:17 -07:00