vllm/tests

Latest commit: d200972e7f [Bugfix] Marlin 2:4 temp fix for large M dim (>256) (#10464)
Author: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Date: 2024-11-19 19:40:33 -08:00
| Name | Last commit | Date |
|------|-------------|------|
| async_engine | [MISC] Consolidate cleanup() and refactor offline_inference_with_prefix.py (#9510) | 2024-10-18 14:30:55 -07:00 |
| basic_correctness | [Bugfix] Fix pickle of input when async output processing is on (#9931) | 2024-11-06 00:39:26 +00:00 |
| compile | [6/N] torch.compile rollout to users (#10437) | 2024-11-19 10:09:03 -08:00 |
| core | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| data | [Bugfix] Fix load config when using bools (#9533) | 2024-10-27 13:46:41 -04:00 |
| distributed | [Bugfix] Fix unable to load some models (#10312) | 2024-11-14 16:55:54 -08:00 |
| encoder_decoder | [Encoder Decoder] Update Mllama to run with both FlashAttention and XFormers (#9982) | 2024-11-12 10:53:57 -08:00 |
| engine | [Misc] Consolidate pooler config overrides (#10351) | 2024-11-15 06:59:00 +00:00 |
| entrypoints | [Frontend] Automatic detection of chat content format from AST (#9919) | 2024-11-16 13:35:40 +08:00 |
| fp8_kv | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| kernels | [Bugfix] Marlin 2:4 temp fix for large M dim (>256) (#10464) | 2024-11-19 19:40:33 -08:00 |
| lora | [LoRA] Adds support for bias in LoRA (#5733) | 2024-11-12 11:08:40 -08:00 |
| metrics | [Frontend] Add max_tokens prometheus metric (#9881) | 2024-11-04 22:53:24 +00:00 |
| model_executor | [6/N] torch.compile rollout to users (#10437) | 2024-11-19 10:09:03 -08:00 |
| models | [VLM] Report multi_modal_placeholders in output (#10407) | 2024-11-18 16:06:16 +08:00 |
| mq_llm_engine | [Bugfix][core] replace heartbeat with pid check (#9818) | 2024-10-30 09:34:07 -07:00 |
| multi_step | [Core] Deprecating block manager v1 and make block manager v2 default (#8704) | 2024-10-17 11:38:15 -05:00 |
| multimodal | [1/N] Initial prototype for multi-modal processor (#10044) | 2024-11-13 12:39:03 +00:00 |
| plugins/vllm_add_dummy_model | [Model] VLM2Vec, the first multimodal embedding model in vLLM (#9303) | 2024-10-16 14:31:00 +08:00 |
| prefix_caching | [Frontend] Add per-request number of cached token stats (#10174) | 2024-11-12 16:42:28 +00:00 |
| prompt_adapter | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| quantization | [Hardware][XPU] AWQ/GPTQ support for xpu backend (#10107) | 2024-11-18 11:18:05 -07:00 |
| samplers | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| spec_decode | Disable spec-decode + chunked-prefill for draft models with tensor parallelism > 1 (#10136) | 2024-11-08 15:56:18 +00:00 |
| tensorizer_loader | [Misc] Fix import error in tensorizer tests and cleanup some code (#10349) | 2024-11-15 09:34:17 +00:00 |
| tokenization | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| tool_use | [Frontend] Pythonic tool parser (#9859) | 2024-11-14 04:14:34 +00:00 |
| tpu | [6/N] torch.compile rollout to users (#10437) | 2024-11-19 10:09:03 -08:00 |
| tracing | [BugFix] Prevent exporting duplicate OpenTelemetry spans (#9017) | 2024-10-22 11:11:53 -07:00 |
| v1 | [1/N] Initial prototype for multi-modal processor (#10044) | 2024-11-13 12:39:03 +00:00 |
| weight_loading | [Model][Quantization] HQQ support through Marlin kernel expansion (#9766) | 2024-11-19 13:31:12 -08:00 |
| worker | [2/N] executor pass the complete config to worker/modelrunner (#9938) | 2024-11-02 07:35:05 -07:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | [Model] Adding Support for Qwen2VL as an Embedding Model. Using MrLight/dse-qwen2-2b-mrl-v1 (#9944) | 2024-11-13 08:28:13 +00:00 |
| test_cache_block_hashing.py | [Core] Make encoder-decoder inputs a nested structure to be more composable (#9604) | 2024-11-05 10:07:31 +08:00 |
| test_config.py | [Misc] Consolidate pooler config overrides (#10351) | 2024-11-15 06:59:00 +00:00 |
| test_embedded_commit.py | [CI/Build] use setuptools-scm to set __version__ (#4738) | 2024-09-23 09:44:26 -07:00 |
| test_inputs.py | [Core][Frontend] Add Support for Inference Time mm_processor_kwargs (#9131) | 2024-10-08 14:12:56 +00:00 |
| test_logger.py | Rename vllm.logging to vllm.logging_utils (#10134) | 2024-11-08 20:53:24 +00:00 |
| test_logits_processor.py | [Core] Factor out common code in SequenceData and Sequence (#8675) | 2024-09-21 02:30:39 +00:00 |
| test_regression.py | Bugfix: fix broken of download models from modelscope (#5233) | 2024-06-06 09:28:10 -07:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| test_scalartype.py | [Bugfix] Fix support for dimension like integers and ScalarType (#9299) | 2024-10-17 19:08:34 +00:00 |
| test_sequence.py | [Core] Factor out common code in SequenceData and Sequence (#8675) | 2024-09-21 02:30:39 +00:00 |
| test_sharded_state_loader.py | [CI/Build] Replaced some models on tests for smaller ones (#9570) | 2024-10-22 04:52:14 +00:00 |
| test_utils.py | [Bugfix] Fix load config when using bools (#9533) | 2024-10-27 13:46:41 -04:00 |
| utils.py | Adds method to read the pooling types from model's files (#9506) | 2024-11-07 08:42:40 +00:00 |