vllm/docs/source
Latest commit: 2024-09-12 23:52:41 -07:00

Name                      Last commit date            Last commit message
_static                   2024-07-27 09:24:46 -07:00  [Docs] Add RunLLM chat widget (#6857)
_templates/sections       2024-07-10 14:55:34 +08:00  [Doc] Guide for adding multi-modal plugins (#6205)
assets                    2024-04-30 17:41:59 +00:00  [Doc] add visualization for multi-stage dockerfile (#4456)
automatic_prefix_caching  2024-06-11 10:24:59 -07:00  [Doc] Add an automatic prefix caching section in vllm documentation (#5324)
community                 2024-09-09 23:21:00 -07:00  Add NVIDIA Meetup slides, announce AMD meetup, and add contact info (#8319)
dev                       2024-09-06 17:48:48 -07:00  [misc] [doc] [frontend] LLM torch profiler support (#7943)
getting_started           2024-09-12 23:52:41 -07:00  [doc] recommend pip instead of conda (#8446)
models                    2024-09-12 10:10:54 -07:00  [Model] Support multiple images for qwen-vl (#8247)
performance_benchmark     2024-08-28 13:54:23 -07:00  [Doc] fix 404 link (#7966)
quantization              2024-09-06 16:29:03 -06:00  [Misc] Remove SqueezeLLM (#8220)
serving                   2024-09-05 16:25:29 -04:00  [Documentation][Spec Decode] Add documentation about lossless guarantees in Speculative Decoding in vLLM (#7962)
conf.py                   2024-09-10 22:21:36 -07:00  [model] Support for Llava-Next-Video model (#7559)
generate_examples.py      2024-04-22 16:36:54 +00:00  Add example scripts to documentation (#4225)
index.rst                 2024-08-21 15:39:26 -07:00  [misc] Add Torch profiler support (#7451)