vllm/tests

Latest commit: 4f93dfe952 [torch.compile] Fuse RMSNorm with quant (#9138)
Author: Luka Govedič
Signed-off-by: luka <luka@neuralmagic.com>
Co-authored-by: youkaichao <youkaichao@126.com>
Date: 2024-11-08 21:20:08 +00:00
| Name | Last commit | Date |
|---|---|---|
| async_engine | [MISC] Consolidate cleanup() and refactor offline_inference_with_prefix.py (#9510) | 2024-10-18 14:30:55 -07:00 |
| basic_correctness | [Bugfix] Fix pickle of input when async output processing is on (#9931) | 2024-11-06 00:39:26 +00:00 |
| compile | [torch.compile] Fuse RMSNorm with quant (#9138) | 2024-11-08 21:20:08 +00:00 |
| core | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| data | [Bugfix] Fix load config when using bools (#9533) | 2024-10-27 13:46:41 -04:00 |
| distributed | [Core][Distributed] Refactor ipc buffer init in CustomAllreduce (#10030) | 2024-11-06 23:50:47 -08:00 |
| encoder_decoder | [Encoder Decoder] Add flash_attn kernel support for encoder-decoder models (#9559) | 2024-11-01 23:22:49 -07:00 |
| engine | Adds method to read the pooling types from model's files (#9506) | 2024-11-07 08:42:40 +00:00 |
| entrypoints | Online video support for VLMs (#10020) | 2024-11-07 20:25:59 +00:00 |
| fp8_kv | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| kernels | [torch.compile] Fuse RMSNorm with quant (#9138) | 2024-11-08 21:20:08 +00:00 |
| lora | [Bugfix][CI/Build][Hardware][AMD] Shard ID parameters in AMD tests running parallel jobs (#9279) | 2024-11-04 11:37:46 -08:00 |
| metrics | [Frontend] Add max_tokens prometheus metric (#9881) | 2024-11-04 22:53:24 +00:00 |
| model_executor | Adds method to read the pooling types from model's files (#9506) | 2024-11-07 08:42:40 +00:00 |
| models | [CI/Build] Update CPU tests to include all "standard" tests (#5481) | 2024-11-08 23:30:04 +08:00 |
| mq_llm_engine | [Bugfix][core] replace heartbeat with pid check (#9818) | 2024-10-30 09:34:07 -07:00 |
| multi_step | [Core] Deprecating block manager v1 and make block manager v2 default (#8704) | 2024-10-17 11:38:15 -05:00 |
| multimodal | [Frontend] Multi-Modality Support for Loading Local Image Files (#9915) | 2024-11-04 15:34:57 +00:00 |
| plugins/vllm_add_dummy_model | [Model] VLM2Vec, the first multimodal embedding model in vLLM (#9303) | 2024-10-16 14:31:00 +08:00 |
| prefix_caching | [Bugfix] Fix illegal memory access error with chunked prefill, prefix caching, block manager v2 and xformers enabled together (#9532) | 2024-10-31 11:46:36 -07:00 |
| prompt_adapter | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| quantization | 🐛 fix torch memory profiling (#9516) | 2024-10-18 21:25:19 -04:00 |
| samplers | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| spec_decode | Disable spec-decode + chunked-prefill for draft models with tensor parallelism > 1 (#10136) | 2024-11-08 15:56:18 +00:00 |
| tensorizer_loader | [MISC] Consolidate cleanup() and refactor offline_inference_with_prefix.py (#9510) | 2024-10-18 14:30:55 -07:00 |
| tokenization | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| tool_use | [Frontend] Tool calling parser for Granite 3.0 models (#9027) | 2024-11-07 07:09:02 -08:00 |
| tpu | [torch.compile] integration with compilation control (#9058) | 2024-10-10 12:39:36 -07:00 |
| tracing | [BugFix] Prevent exporting duplicate OpenTelemetry spans (#9017) | 2024-10-22 11:11:53 -07:00 |
| v1/core | [V1] Prefix caching (take 2) (#9972) | 2024-11-07 17:34:44 -08:00 |
| weight_loading | [CI/Build] Add shell script linting using shellcheck (#7925) | 2024-11-07 18:17:29 +00:00 |
| worker | [2/N] executor pass the complete config to worker/modelrunner (#9938) | 2024-11-02 07:35:05 -07:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | [V1] Make v1 more testable (#9888) | 2024-11-06 11:57:35 -08:00 |
| test_cache_block_hashing.py | [Core] Make encoder-decoder inputs a nested structure to be more composable (#9604) | 2024-11-05 10:07:31 +08:00 |
| test_config.py | Adds method to read the pooling types from model's files (#9506) | 2024-11-07 08:42:40 +00:00 |
| test_embedded_commit.py | [CI/Build] use setuptools-scm to set __version__ (#4738) | 2024-09-23 09:44:26 -07:00 |
| test_inputs.py | [Core][Frontend] Add Support for Inference Time mm_processor_kwargs (#9131) | 2024-10-08 14:12:56 +00:00 |
| test_logger.py | Rename vllm.logging to vllm.logging_utils (#10134) | 2024-11-08 20:53:24 +00:00 |
| test_logits_processor.py | [Core] Factor out common code in SequenceData and Sequence (#8675) | 2024-09-21 02:30:39 +00:00 |
| test_regression.py | Bugfix: fix broken of download models from modelscope (#5233) | 2024-06-06 09:28:10 -07:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| test_scalartype.py | [Bugfix] Fix support for dimension like integers and ScalarType (#9299) | 2024-10-17 19:08:34 +00:00 |
| test_sequence.py | [Core] Factor out common code in SequenceData and Sequence (#8675) | 2024-09-21 02:30:39 +00:00 |
| test_sharded_state_loader.py | [CI/Build] Replaced some models on tests for smaller ones (#9570) | 2024-10-22 04:52:14 +00:00 |
| test_utils.py | [Bugfix] Fix load config when using bools (#9533) | 2024-10-27 13:46:41 -04:00 |
| utils.py | Adds method to read the pooling types from model's files (#9506) | 2024-11-07 08:42:40 +00:00 |