[Doc] Update VLM doc about loading from local files (#9999)

Signed-off-by: Roger Wang <ywang@roblox.com>
Roger Wang 2024-11-04 11:47:11 -08:00 committed by GitHub
parent 5208dc7a20
commit 6e056bcf04


@@ -242,6 +242,10 @@ To consume the server, you can use the OpenAI client like in the example below:
A full code example can be found in `examples/openai_chat_completion_client_for_multimodal.py <https://github.com/vllm-project/vllm/blob/main/examples/openai_chat_completion_client_for_multimodal.py>`_.
.. tip::
    Loading from local file paths is also supported in vLLM: you can specify the allowed local media path via ``--allowed-local-media-path`` when launching the API server/engine,
    and pass the file path as ``url`` in the API request, as sketched below.
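
A minimal sketch of such a request, assuming the server was launched with ``--allowed-local-media-path /data/images``; the model name, paths, and the use of a ``file://`` URL for the local file are illustrative assumptions, so adjust them to your deployment:

.. code-block:: python

    from openai import OpenAI

    # Assumes a server started along the lines of:
    #   vllm serve <model> --allowed-local-media-path /data/images
    # The exact URL form for local files (plain path vs. ``file://``) is an
    # assumption here; adjust to your vLLM version and deployment.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    chat_response = client.chat.completions.create(
        model="<model>",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "file:///data/images/example.png"},
                },
            ],
        }],
    )
    print(chat_response.choices[0].message.content)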
.. tip::
    There is no need to place image placeholders in the text content of the API request - they are already represented by the image content.
    In fact, you can place image placeholders in the middle of the text by interleaving text and image content, as in the sketch below.
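
For example, a single user message can interleave text and image entries in the ``content`` list; the URLs and prompt text in this sketch are purely illustrative:

.. code-block:: python

    # A single user message interleaving text and image content (illustrative).
    messages = [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Compare this chart"},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart1.png"}},
            {"type": "text", "text": "with this one:"},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart2.png"}},
        ],
    }]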