vllm/tests/models/decoder_only/vision_language
Latest commit: c7dec926f6 by lkchen, [VLM] Report multi_modal_placeholders in output (#10407), 2024-11-18 16:06:16 +08:00 (Signed-off-by: Linkun Chen <lkchen+anyscale@github.com>)
Name | Last commit message | Last commit date
mm_processor_kwargs | [1/N] Initial prototype for multi-modal processor (#10044) | 2024-11-13 12:39:03 +00:00
vlm_utils | [CI/Build] Fix VLM broadcast tests tensor_parallel_size passing (#10161) | 2024-11-09 04:02:59 +00:00
__init__.py | [CI/Build] Reorganize models tests (#7820) | 2024-09-13 10:20:06 -07:00
test_awq.py | [CI/Build] Split up models tests (#10069) | 2024-11-09 11:39:14 -08:00
test_h2ovl.py | [CI/Build] Update CPU tests to include all "standard" tests (#5481) | 2024-11-08 23:30:04 +08:00
test_intern_vit.py | [CI/Build] Split up models tests (#10069) | 2024-11-09 11:39:14 -08:00
test_models.py | [CI/Build] Split up models tests (#10069) | 2024-11-09 11:39:14 -08:00
test_phi3v.py | [CI/Build] Update CPU tests to include all "standard" tests (#5481) | 2024-11-08 23:30:04 +08:00
test_pixtral.py | [VLM] Report multi_modal_placeholders in output (#10407) | 2024-11-18 16:06:16 +08:00
test_qwen2_vl.py | [Bugfix] Fix M-RoPE position calculation when chunked prefill is enabled (#10388) | 2024-11-17 02:10:00 +08:00
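The files above are pytest suites for vLLM's decoder-only vision-language models. As a rough illustration of the offline multi-modal inference path these tests exercise, here is a minimal sketch; the model name, image path, and Phi-3-vision prompt template are illustrative assumptions, not taken from the listing.

```python
# Minimal sketch of the offline VLM inference path these tests exercise.
# Assumptions: vLLM's LLM.generate() dict form with "multi_modal_data",
# the Phi-3-vision prompt template, and a local image file "example.jpg".
from PIL import Image

from vllm import LLM, SamplingParams

llm = LLM(
    model="microsoft/Phi-3-vision-128k-instruct",  # illustrative model choice
    trust_remote_code=True,
    max_model_len=4096,
)

image = Image.open("example.jpg")  # placeholder image path
prompt = "<|user|>\n<|image_1|>\nWhat is shown in this image?<|end|>\n<|assistant|>\n"

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=64, temperature=0.0),
)

# Per the latest commit above (#10407), the request output also reports
# multi_modal_placeholders describing where the image tokens landed in the prompt.
print(outputs[0].outputs[0].text)
```

The individual suites (e.g. test_phi3v.py or test_qwen2_vl.py) can be run directly with pytest against this directory.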