vllm/docs/source/models
Latest commit: 6a512a00df by Yangshen⚡Deng, 2024-09-10 22:21:36 -07:00
[model] Support for Llava-Next-Video model (#7559)
Co-authored-by: Roger Wang <ywang@roblox.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
File                            Last commit                                                                                                            Date
adding_model.rst                [Doc][CI/Build] Update docs and tests to use vllm serve (#6431)                                                       2024-07-17 07:43:21 +00:00
enabling_multimodal_inputs.rst  [VLM][Core] Support profiling with multiple multi-modal inputs per prompt (#7126)                                     2024-08-14 17:55:42 +00:00
engine_args.rst                 [Doc][CI/Build] Update docs and tests to use vllm serve (#6431)                                                       2024-07-17 07:43:21 +00:00
lora.rst                        [Core] Support load and unload LoRA in api server (#6566)                                                             2024-09-05 18:10:33 -07:00
performance.rst                 [Scheduler] Warning upon preemption and Swapping (#4647)                                                              2024-05-13 23:50:44 +09:00
spec_decode.rst                 [Documentation][Spec Decode] Add documentation about lossless guarantees in Speculative Decoding in vLLM (#7962)      2024-09-05 16:25:29 -04:00
supported_models.rst            [model] Support for Llava-Next-Video model (#7559)                                                                    2024-09-10 22:21:36 -07:00
vlm.rst                         [Doc] Indicate more information about supported modalities (#8181)                                                    2024-09-05 10:51:53 +00:00