vllm/docs/source

Latest commit: llama_index serving integration documentation (#6973)
Author: Kameshwara Pavan Kumar Mantha (22b39e11f2)
Co-authored-by: pavanmantha <pavan.mantha@thevaslabs.io>
Date: 2024-08-14 15:38:37 -07:00
Name                      Last commit                                                                          Last commit date
_static                   [Docs] Add RunLLM chat widget (#6857)                                                2024-07-27 09:24:46 -07:00
_templates/sections       [Doc] Guide for adding multi-modal plugins (#6205)                                   2024-07-10 14:55:34 +08:00
assets                    [Doc] add visualization for multi-stage dockerfile (#4456)                           2024-04-30 17:41:59 +00:00
automatic_prefix_caching  [Doc] Add an automatic prefix caching section in vllm documentation (#5324)          2024-06-11 10:24:59 -07:00
community                 Add Skywork AI as Sponsor (#7314)                                                    2024-08-08 13:59:57 -07:00
dev                       [VLM][Core] Support profiling with multiple multi-modal inputs per prompt (#7126)    2024-08-14 17:55:42 +00:00
getting_started           [doc] update test script to include cudagraph (#7501)                                2024-08-13 21:52:58 -07:00
models                    [VLM][Core] Support profiling with multiple multi-modal inputs per prompt (#7126)    2024-08-14 17:55:42 +00:00
performance_benchmark     [Doc] Add documentations for nightly benchmarks (#6412)                              2024-07-25 11:57:16 -07:00
quantization              Revert "[Doc] Update supported_hardware.rst (#7276)" (#7467)                         2024-08-13 01:37:08 -07:00
serving                   llama_index serving integration documentation (#6973)                                2024-08-14 15:38:37 -07:00
conf.py                   [Bugfix][Docs] Update list of mock imports (#7493)                                   2024-08-13 20:37:30 -07:00
generate_examples.py      Add example scripts to documentation (#4225)                                         2024-04-22 16:36:54 +00:00
index.rst                 [Docs] Update readme (#7316)                                                          2024-08-11 17:13:37 -07:00