vllm/docs/source

Latest commit: fc912e0886 — [Models] Support Qwen model with PP (#6974)
Author: Murali Andoorveedu, 2024-08-01 12:40:43 -07:00
Signed-off-by: Muralidhar Andoorveedu <muralidhar.andoorveedu@centml.ai>
Name                     | Last commit                                                                  | Date
_static                  | [Docs] Add RunLLM chat widget (#6857)                                        | 2024-07-27 09:24:46 -07:00
_templates/sections      | [Doc] Guide for adding multi-modal plugins (#6205)                           | 2024-07-10 14:55:34 +08:00
assets                   | [Doc] add visualization for multi-stage dockerfile (#4456)                   | 2024-04-30 17:41:59 +00:00
automatic_prefix_caching | [Doc] Add an automatic prefix caching section in vllm documentation (#5324)  | 2024-06-11 10:24:59 -07:00
community                | [Docs] Publish 5th meetup slides (#6799)                                     | 2024-07-25 16:47:55 -07:00
dev                      | [Bugfix] Fix broadcasting logic for multi_modal_kwargs (#6836)               | 2024-07-31 10:38:45 +08:00
getting_started          | [Kernel][RFC] Refactor the punica kernel based on Triton (#5036)             | 2024-07-31 17:12:24 -07:00
models                   | [Bugfix] Clean up MiniCPM-V (#6939)                                          | 2024-07-31 14:39:19 +00:00
performance_benchmark    | [Doc] Add documentations for nightly benchmarks (#6412)                      | 2024-07-25 11:57:16 -07:00
quantization             | [bitsandbytes]: support read bnb pre-quantized model (#5753)                 | 2024-07-23 23:45:09 +00:00
serving                  | [Models] Support Qwen model with PP (#6974)                                  | 2024-08-01 12:40:43 -07:00
conf.py                  | [Docs] Add RunLLM chat widget (#6857)                                        | 2024-07-27 09:24:46 -07:00
generate_examples.py     | Add example scripts to documentation (#4225)                                 | 2024-04-22 16:36:54 +00:00
index.rst                | [Doc] Add documentations for nightly benchmarks (#6412)                      | 2024-07-25 11:57:16 -07:00