vllm/tests
| Name | Last commit | Date |
| --- | --- | --- |
| async_engine | [Frontend] Gracefully handle missing chat template and fix CI failure (#7238) | 2024-08-07 09:12:05 +00:00 |
| basic_correctness | [Bugfix] Fix GPTQ and GPTQ Marlin CPU Offloading (#7225) | 2024-08-06 18:34:26 -07:00 |
| core | [Bugfix][fast] Fix the get_num_blocks_touched logic (#6849) | 2024-08-08 10:43:30 -07:00 |
| distributed | [Core] Support serving encoder/decoder models (#7258) | 2024-08-09 10:39:41 +08:00 |
| engine | [Frontend] Refactor prompt processing (#4028) | 2024-07-22 10:13:53 -07:00 |
| entrypoints | [Core] Support serving encoder/decoder models (#7258) | 2024-08-09 10:39:41 +08:00 |
| fp8_kv | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| kernels | [Bugfix][Kernel] Increased atol to fix failing tests (#7305) | 2024-08-08 12:16:13 -04:00 |
| lora | [LoRA] Relax LoRA condition (#7146) | 2024-08-06 01:57:25 +00:00 |
| metrics | [Bugfix] StatLoggers: cache spec decode metrics when they get collected. (#6645) | 2024-07-23 23:05:05 +00:00 |
| model_executor | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00 |
| models | [Core] Support serving encoder/decoder models (#7258) | 2024-08-09 10:39:41 +08:00 |
| multimodal | [Misc] Manage HTTP connections in one place (#6600) | 2024-07-22 21:32:02 -07:00 |
| prefix_caching | [Bugfix] Fix block table for seqs that have prefix cache hits (#7018) | 2024-08-02 22:38:15 -07:00 |
| prompt_adapter | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| quantization | [Bugfix][FP8] Fix dynamic FP8 Marlin quantization (#7219) | 2024-08-07 11:23:12 -07:00 |
| samplers | [Speculative decoding] [Multi-Step] decouple should_modify_greedy_probs_inplace (#6971) | 2024-08-09 05:42:45 +00:00 |
| spec_decode | [Bugfix] Fix speculative decoding with MLPSpeculator with padded vocabulary (#7218) | 2024-08-08 22:08:46 -07:00 |
| tensorizer_loader | [Bugfix] Fix tensorizer memory profiling bug during testing (#6881) | 2024-07-30 11:48:50 -07:00 |
| tokenization | [Core] Allow specifying custom Executor (#6557) | 2024-07-20 01:25:06 +00:00 |
| tracing | [Misc] Add OpenTelemetry support (#4687) | 2024-06-19 01:17:03 +09:00 |
| worker | [Core] Subclass ModelRunner to support cross-attention & encoder sequences (towards eventual encoder/decoder model support) (#4942) | 2024-08-06 16:51:47 -04:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | [Core] Support serving encoder/decoder models (#7258) | 2024-08-09 10:39:41 +08:00 |
| test_cache_block_hashing.py | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| test_config.py | [Bugfix] Bump transformers to 4.43.2 (#6752) | 2024-07-24 13:22:16 -07:00 |
| test_embedded_commit.py | [Misc] Add generated git commit hash as vllm.__commit__ (#6386) | 2024-07-12 22:52:15 +00:00 |
| test_inputs.py | [Core] Support serving encoder/decoder models (#7258) | 2024-08-09 10:39:41 +08:00 |
| test_logger.py | [mypy] Enable type checking for test directory (#5017) | 2024-06-15 04:45:31 +00:00 |
| test_logits_processor.py | [CORE] Quantized lm-head Framework (#4442) | 2024-07-02 22:25:17 +00:00 |
| test_regression.py | Bugfix: fix broken of download models from modelscope (#5233) | 2024-06-06 09:28:10 -07:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| test_scalartype.py | [Misc] Disambiguate quantized types via a new ScalarType (#6396) | 2024-08-02 13:51:58 -07:00 |
| test_sequence.py | [CI/Build] Move test_utils.py to tests/utils.py (#4425) | 2024-05-13 23:50:09 +09:00 |
| test_sharded_state_loader.py | [CI] Upgrade codespell version. (#5381) | 2024-06-12 10:06:14 -07:00 |
| test_utils.py | [BugFix] Overhaul async request cancellation (#7111) | 2024-08-07 13:21:41 +08:00 |
| utils.py | [Frontend] Gracefully handle missing chat template and fix CI failure (#7238) | 2024-08-07 09:12:05 +00:00 |