
vLLM Documentation

Build the docs

# Install dependencies.
pip install -r requirements-docs.txt

# Build the docs.
make clean
make html

Serve the docs locally

python -m http.server -d build/html/

Then launch your browser and open http://localhost:8000/.
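For context, the `python -m http.server -d build/html/` command above is roughly equivalent to the following stdlib sketch (a minimal illustration, not part of the vLLM repo; the `DOCS_DIR` constant is an assumption matching the Sphinx output path):

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

DOCS_DIR = "build/html"  # directory produced by `make html`

# The -d/--directory flag of `python -m http.server` maps to this
# keyword argument of SimpleHTTPRequestHandler (Python 3.7+).
handler = partial(SimpleHTTPRequestHandler, directory=DOCS_DIR)

# Port 0 asks the OS for any free port; the stock command defaults to 8000.
server = HTTPServer(("localhost", 0), handler)
host, port = server.server_address
print(f"Docs would be served at http://{host}:{port}/")
server.server_close()  # a real script would call server.serve_forever() instead
```

This only shows how the pieces fit together; for day-to-day use the one-line `http.server` invocation is simpler.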