vllm/docs/source
Maximilien de Bayser 344cd2b6f4
[Feature] Add support for Llama 3.1 and 3.2 tool use (#8343)
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
2024-09-26 17:01:42 -07:00
_static [Docs] Add RunLLM chat widget (#6857) 2024-07-27 09:24:46 -07:00
_templates/sections [Doc] Guide for adding multi-modal plugins (#6205) 2024-07-10 14:55:34 +08:00
assets [Doc] add visualization for multi-stage dockerfile (#4456) 2024-04-30 17:41:59 +00:00
automatic_prefix_caching [Doc] Add an automatic prefix caching section in vllm documentation (#5324) 2024-06-11 10:24:59 -07:00
community Add NVIDIA Meetup slides, announce AMD meetup, and add contact info (#8319) 2024-09-09 23:21:00 -07:00
dev Revert "rename PromptInputs and inputs with backward compatibility (#8760)" (#8810) 2024-09-25 10:36:26 -07:00
getting_started [misc][installation] build from source without compilation (#8818) 2024-09-26 13:19:04 -07:00
models [Misc] Update config loading for Qwen2-VL and remove Granite (#8837) 2024-09-26 07:45:30 -07:00
performance_benchmark [Doc] fix 404 link (#7966) 2024-08-28 13:54:23 -07:00
quantization [Misc] Upgrade bitsandbytes to the latest version 0.44.0 (#8768) 2024-09-24 17:08:55 -07:00
serving [Feature] Add support for Llama 3.1 and 3.2 tool use (#8343) 2024-09-26 17:01:42 -07:00
conf.py [Model] Support for Llava-Next-Video model (#7559) 2024-09-10 22:21:36 -07:00
generate_examples.py Add example scripts to documentation (#4225) 2024-04-22 16:36:54 +00:00
index.rst [Doc] neuron documentation update (#8671) 2024-09-20 15:04:37 -07:00