# vLLM documentation

## Build the docs

```bash
# Install dependencies.
pip install -r requirements-docs.txt

# Build the docs.
make clean
make html
```
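
If you want the pages to rebuild automatically while you edit, one option is a live-reload workflow with `sphinx-autobuild`. This is only a sketch: `sphinx-autobuild` is an extra tool, not listed in `requirements-docs.txt`, and the `source`/`build/html` paths assume the default layout of this directory.

```bash
# Optional: rebuild and serve the docs on file changes.
# sphinx-autobuild is an extra dependency, not part of requirements-docs.txt.
pip install sphinx-autobuild

# Watch the "source" directory and write the rendered HTML to "build/html".
sphinx-autobuild source build/html
```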

## Open the docs with your browser

```bash
python -m http.server -d build/html/
```

Launch your browser and open http://localhost:8000.
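
If port 8000 is already in use, `http.server` accepts an alternative port as a positional argument; for example, the following serves the same built docs on port 8080 (the port number here is just an illustration):

```bash
# Serve the built docs on port 8080 instead of the default 8000.
python -m http.server -d build/html/ 8080
```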