# vllm/tests

Latest commit: `59449095ab` by Charlie Fu — [Performance][Kernel] Fused_moe Performance Improvement (#9384)
Signed-off-by: charlifu <charlifu@amd.com> · 2024-10-24 15:37:52 -07:00
| Path | Last commit | Date |
| --- | --- | --- |
| async_engine | [MISC] Consolidate cleanup() and refactor offline_inference_with_prefix.py (#9510) | 2024-10-18 14:30:55 -07:00 |
| basic_correctness | [CI/Build] Replaced some models on tests for smaller ones (#9570) | 2024-10-22 04:52:14 +00:00 |
| compile | [CI/Build] Replaced some models on tests for smaller ones (#9570) | 2024-10-22 04:52:14 +00:00 |
| core | [core] simplify seq group code (#9569) | 2024-10-24 00:16:44 -07:00 |
| data | [Bugfix] Fix order of arguments matters in config.yaml (#8960) | 2024-10-05 17:35:11 +00:00 |
| distributed | [torch.compile] Adding torch compile annotations to some models (#9641) | 2024-10-24 09:31:42 -07:00 |
| encoder_decoder | [Hardware][CPU] using current_platform.is_cpu (#9536) | 2024-10-22 00:50:43 -07:00 |
| engine | [Frontend] [Neuron] Parse literals out of override-neuron-config (#8959) | 2024-10-03 18:02:07 +00:00 |
| entrypoints | [Bugfix]: Make chat content text allow type content (#9358) | 2024-10-24 05:05:49 +00:00 |
| fp8_kv | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| kernels | [Performance][Kernel] Fused_moe Performance Improvement (#9384) | 2024-10-24 15:37:52 -07:00 |
| lora | [CI/Build][LoRA] Temporarily fix long context failure issue (#9579) | 2024-10-22 11:32:51 +00:00 |
| metrics | [BugFix] Fix metrics error for --num-scheduler-steps > 1 (#8234) | 2024-10-22 15:43:03 -07:00 |
| model_executor | [torch.compile] Fine-grained CustomOp enabling mechanism (#9300) | 2024-10-17 18:36:37 +00:00 |
| models | [Model] Compute Llava Next Max Tokens / Dummy Data From Gridpoints (#9650) | 2024-10-24 10:42:24 -07:00 |
| mq_llm_engine | [Frontend] Don't log duplicate error stacktrace for every request in the batch (#9023) | 2024-10-21 14:49:41 -07:00 |
| multi_step | [Core] Deprecating block manager v1 and make block manager v2 default (#8704) | 2024-10-17 11:38:15 -05:00 |
| multimodal | [Model] Add user-configurable task for models that support both generation and embedding (#9424) | 2024-10-18 11:31:58 -07:00 |
| plugins/vllm_add_dummy_model | [Model] VLM2Vec, the first multimodal embedding model in vLLM (#9303) | 2024-10-16 14:31:00 +08:00 |
| prefix_caching | [MISC] Consolidate cleanup() and refactor offline_inference_with_prefix.py (#9510) | 2024-10-18 14:30:55 -07:00 |
| prompt_adapter | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| prompts | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| quantization | 🐛 fix torch memory profiling (#9516) | 2024-10-18 21:25:19 -04:00 |
| samplers | [core] remove beam search from the core (#9105) | 2024-10-07 05:47:04 +00:00 |
| spec_decode | [MISC] Consolidate cleanup() and refactor offline_inference_with_prefix.py (#9510) | 2024-10-18 14:30:55 -07:00 |
| tensorizer_loader | [MISC] Consolidate cleanup() and refactor offline_inference_with_prefix.py (#9510) | 2024-10-18 14:30:55 -07:00 |
| tokenization | [Core] Allow specifying custom Executor (#6557) | 2024-07-20 01:25:06 +00:00 |
| tool_use | [Frontend][Feature] Add jamba tool parser (#9154) | 2024-10-18 10:27:48 +00:00 |
| tpu | [torch.compile] integration with compilation control (#9058) | 2024-10-10 12:39:36 -07:00 |
| tracing | [BugFix] Prevent exporting duplicate OpenTelemetry spans (#9017) | 2024-10-22 11:11:53 -07:00 |
| weight_loading | [Bugfix] Fix Weight Loading Multiple GPU Test - Large Models (#9213) | 2024-10-10 14:15:40 +08:00 |
| worker | [Hardware][CPU] using current_platform.is_cpu (#9536) | 2024-10-22 00:50:43 -07:00 |
| __init__.py | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| conftest.py | [CI/Build] Fix VLM test failures when using transformers v4.46 (#9666) | 2024-10-25 01:40:40 +08:00 |
| test_cache_block_hashing.py | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| test_config.py | [Model] Add user-configurable task for models that support both generation and embedding (#9424) | 2024-10-18 11:31:58 -07:00 |
| test_embedded_commit.py | [CI/Build] use setuptools-scm to set __version__ (#4738) | 2024-09-23 09:44:26 -07:00 |
| test_inputs.py | [Core][Frontend] Add Support for Inference Time mm_processor_kwargs (#9131) | 2024-10-08 14:12:56 +00:00 |
| test_logger.py | [CI/Build] Update Ruff version (#8469) | 2024-09-18 11:00:56 +00:00 |
| test_logits_processor.py | [Core] Factor out common code in SequenceData and Sequence (#8675) | 2024-09-21 02:30:39 +00:00 |
| test_regression.py | Bugfix: fix broken of download models from modelscope (#5233) | 2024-06-06 09:28:10 -07:00 |
| test_sampling_params.py | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| test_scalartype.py | [Bugfix] Fix support for dimension like integers and ScalarType (#9299) | 2024-10-17 19:08:34 +00:00 |
| test_sequence.py | [Core] Factor out common code in SequenceData and Sequence (#8675) | 2024-09-21 02:30:39 +00:00 |
| test_sharded_state_loader.py | [CI/Build] Replaced some models on tests for smaller ones (#9570) | 2024-10-22 04:52:14 +00:00 |
| test_utils.py | [Model] Add user-configurable task for models that support both generation and embedding (#9424) | 2024-10-18 11:31:58 -07:00 |
| utils.py | [CI/Build] Remove unnecessary fork_new_process (#9484) | 2024-10-21 19:47:29 -07:00 |