
# vLLM

## Build from source

```bash
pip install -r requirements.txt
pip install -e .  # This may take several minutes.
```
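
After the build finishes, you can sanity-check the installation with a few lines of offline inference. This is a minimal sketch following the quickstart guide; the model name below is only an example.

```python
# Minimal offline-inference sanity check (mirrors the quickstart guide).
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="facebook/opt-125m")  # example model; any supported HF model works
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")
```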

## Test simple server

```bash
# Single-GPU inference.
python examples/simple_server.py  # --model <your_model>

# Multi-GPU inference (e.g., 2 GPUs).
ray start --head
python examples/simple_server.py -tp 2  # --model <your_model>
```

To see the full list of arguments accepted by `simple_server.py`, run:

```bash
python examples/simple_server.py --help
```
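
The `-tp` flag above enables tensor-parallel inference across multiple GPUs. If you prefer the Python API, the sketch below passes a `tensor_parallel_size` argument to `LLM`; the argument name is an assumption based on the engine arguments, so double-check it against this revision of the code.

```python
# Sketch: multi-GPU (tensor-parallel) inference via the Python API.
# Assumption: LLM accepts a tensor_parallel_size argument mirroring the
# -tp flag of simple_server.py; `ray start --head` must be run first.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-13b", tensor_parallel_size=2)
outputs = llm.generate(["The future of AI is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```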

## FastAPI server

To start the server:

```bash
ray start --head
python -m vllm.entrypoints.fastapi_server  # --model <your_model>
```

To test the server:

```bash
python test_cli_client.py
```
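
`test_cli_client.py` is the reference client. For a rough idea of what such a request looks like, see the sketch below; the `/generate` route, port, and JSON field names are assumptions about this revision of `fastapi_server`, so verify them against `test_cli_client.py` before relying on them.

```python
# Hypothetical HTTP client for the FastAPI server.
# Assumptions: the server listens on localhost:8000 and exposes a POST
# /generate endpoint taking a JSON body with a "prompt" field; check
# test_cli_client.py for the exact request format.
import requests

response = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": "San Francisco is a", "max_tokens": 64},
)
print(response.json())
```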

## Gradio web server

Install the additional dependency:

```bash
pip install gradio
```

Start the servers:

```bash
python -m vllm.http_frontend.fastapi_frontend
# In another terminal
python -m vllm.http_frontend.gradio_webserver
```

## Load LLaMA weights

Since the LLaMA weights are not fully public, they cannot be downloaded directly from Hugging Face. To load LLaMA models, follow the steps below.

1. Convert the LLaMA weights to the Hugging Face format with the conversion script from the `transformers` repository:

   ```bash
   python src/transformers/models/llama/convert_llama_weights_to_hf.py \
       --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path/llama-7b
   ```

2. For all the commands above, pass `--model /output/path/llama-7b` to load the converted model. For example:

   ```bash
   python examples/simple_server.py --model /output/path/llama-7b
   python -m vllm.http_frontend.fastapi_frontend --model /output/path/llama-7b
   ```
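
The same local path also works with the Python API. A short sketch, assuming the `LLM` class from the quickstart guide:

```python
# Sketch: loading the converted LLaMA checkpoint through the Python API.
# Assumes the conversion step above wrote the weights to /output/path/llama-7b.
from vllm import LLM

llm = LLM(model="/output/path/llama-7b")
print(llm.generate(["Hello, my name is"])[0].outputs[0].text)
```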