vllm/docs/source

Latest commit: d7263a1bb8 by Rafael Vasquez
Doc: Improve benchmark documentation (#9927)
Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>
2024-11-06 23:50:35 -08:00
Name                      Last commit                                                                   Date
_static                   [Docs] Add RunLLM chat widget (#6857)                                         2024-07-27 09:24:46 -07:00
_templates/sections       [Doc] Guide for adding multi-modal plugins (#6205)                            2024-07-10 14:55:34 +08:00
assets                    [Doc] add visualization for multi-stage dockerfile (#4456)                    2024-04-30 17:41:59 +00:00
automatic_prefix_caching  [Doc] Add an automatic prefix caching section in vllm documentation (#5324)   2024-06-11 10:24:59 -07:00
community                 Add NVIDIA Meetup slides, announce AMD meetup, and add contact info (#8319)   2024-09-09 23:21:00 -07:00
dev                       Doc: Improve benchmark documentation (#9927)                                  2024-11-06 23:50:35 -08:00
getting_started           [doc] add back Python 3.8 ABI (#10100)                                        2024-11-06 21:06:41 -08:00
models                    [Model][LoRA]LoRA support added for Qwen2VLForConditionalGeneration (#10022)  2024-11-06 14:13:15 +00:00
performance               Doc: Improve benchmark documentation (#9927)                                  2024-11-06 23:50:35 -08:00
quantization              [Hardware][CPU] Support AWQ for CPU backend (#7515)                           2024-10-09 10:28:08 -06:00
serving                   [Misc] Consolidate ModelConfig code related to HF config (#10104)             2024-11-07 06:00:21 +00:00
conf.py                   [Frontend] Chat-based Embeddings API (#9759)                                  2024-11-01 08:13:35 +00:00
generate_examples.py      Add example scripts to documentation (#4225)                                  2024-04-22 16:36:54 +00:00
index.rst                 Doc: Improve benchmark documentation (#9927)                                  2024-11-06 23:50:35 -08:00