[Misc] Remove user-facing error for removed VLM args (#9104)

Cyrus Leung 2024-10-06 16:33:52 +08:00 committed by GitHub
parent 168cab6bbf
commit f22619fe96
2 changed files with 1 addition and 13 deletions


@@ -23,10 +23,6 @@ The :class:`~vllm.LLM` class can be instantiated in much the same way as languag
     llm = LLM(model="llava-hf/llava-1.5-7b-hf")
-.. note::
-    We have removed all vision language related CLI args in the ``0.5.1`` release. **This is a breaking change**, so please update your code to follow
-    the above snippet. Specifically, ``image_feature_size`` can no longer be specified as we now calculate that internally for each model.
 To pass an image to the model, note the following in :class:`vllm.inputs.PromptType`:
 * ``prompt``: The prompt should follow the format that is documented on HuggingFace.
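
The documentation kept by this hunk describes the supported path: images are passed per request through the prompt dict described by :class:`vllm.inputs.PromptType`, not through the removed CLI args. Below is a minimal sketch of that usage; the image path and the exact LLaVA chat prompt string are illustrative assumptions, not part of this commit.

from PIL import Image  # assumed available for loading the example image

from vllm import LLM

# Instantiate the model as in the snippet above; no vision-specific
# kwargs (e.g. image_feature_size) are needed.
llm = LLM(model="llava-hf/llava-1.5-7b-hf")

# "example.jpg" is a placeholder path; the prompt follows the LLaVA chat
# format documented on HuggingFace.
image = Image.open("example.jpg")
outputs = llm.generate({
    "prompt": "USER: <image>\nWhat is in this image?\nASSISTANT:",
    "multi_modal_data": {"image": image},
})
print(outputs[0].outputs[0].text)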


@@ -180,15 +180,7 @@ class LLM:
         if "disable_log_stats" not in kwargs:
             kwargs["disable_log_stats"] = True
-        removed_vision_keys = (
-            "image_token_id",
-            "image_feature_size",
-            "image_input_shape",
-            "image_input_type",
-        )
-        if any(k in kwargs for k in removed_vision_keys):
-            raise TypeError(
-                "There is no need to pass vision-related arguments anymore.")
         engine_args = EngineArgs(
             model=model,
             tokenizer=tokenizer,
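
For illustration only, a self-contained sketch (the names below are hypothetical stand-ins, not vLLM code) of what happens to one of these removed keywords once the explicit guard above is gone: the leftover key travels with ``**kwargs`` into the ``EngineArgs`` dataclass constructor, which rejects unknown keywords on its own, so only the custom user-facing message disappears.

from dataclasses import dataclass
from typing import Optional


@dataclass
class EngineArgsSketch:
    """Hypothetical stand-in for vllm.EngineArgs (a dataclass)."""
    model: str
    tokenizer: Optional[str] = None


def make_llm(model: str, **kwargs):
    # Removed keys such as "image_feature_size" are no longer intercepted
    # here; they simply reach the dataclass constructor and fail there.
    return EngineArgsSketch(model=model, **kwargs)


make_llm("llava-hf/llava-1.5-7b-hf")  # fine
# make_llm("llava-hf/llava-1.5-7b-hf", image_feature_size=576)
# -> raises TypeError: got an unexpected keyword argument 'image_feature_size'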