vllm/docs/source
Latest commit: 7c7714d856 by Alexander Matveev, 2024-09-18 13:56:58 +00:00
[Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157)
Co-authored-by: Nick Hill, Robert Shaw, Simon Mo
Name | Last commit | Date
_static | [Docs] Add RunLLM chat widget (#6857) | 2024-07-27 09:24:46 -07:00
_templates/sections | [Doc] Guide for adding multi-modal plugins (#6205) | 2024-07-10 14:55:34 +08:00
assets | [Doc] add visualization for multi-stage dockerfile (#4456) | 2024-04-30 17:41:59 +00:00
automatic_prefix_caching | [Doc] Add an automatic prefix caching section in vllm documentation (#5324) | 2024-06-11 10:24:59 -07:00
community | Add NVIDIA Meetup slides, announce AMD meetup, and add contact info (#8319) | 2024-09-09 23:21:00 -07:00
dev | [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157) | 2024-09-18 13:56:58 +00:00
getting_started | [doc] improve installation doc (#8550) | 2024-09-17 16:24:06 -07:00
models | [Model] support minicpm3 (#8297) | 2024-09-14 14:50:26 +00:00
performance_benchmark | [Doc] fix 404 link (#7966) | 2024-08-28 13:54:23 -07:00
quantization | [Misc] Remove SqueezeLLM (#8220) | 2024-09-06 16:29:03 -06:00
serving | [Documentation][Spec Decode] Add documentation about lossless guarantees in Speculative Decoding in vLLM (#7962) | 2024-09-05 16:25:29 -04:00
conf.py | [model] Support for Llava-Next-Video model (#7559) | 2024-09-10 22:21:36 -07:00
generate_examples.py | Add example scripts to documentation (#4225) | 2024-04-22 16:36:54 +00:00
index.rst | [misc] Add Torch profiler support (#7451) | 2024-08-21 15:39:26 -07:00