vllm/docs/source
jvmncs 8f36444c4f
multi-LoRA as extra models in OpenAI server (#2775)
How to serve the LoRAs (mimicking the [multilora inference example](https://github.com/vllm-project/vllm/blob/main/examples/multilora_inference.py)):
```terminal
$ export LORA_PATH=~/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/
$ python -m vllm.entrypoints.openai.api_server \
 --model meta-llama/Llama-2-7b-hf \
 --enable-lora \
 --lora-modules sql-lora=$LORA_PATH sql-lora2=$LORA_PATH
```
The above server will list three separate models if the user queries `/models`: one for the base served model, and one for each of the specified LoRA modules. In this case `sql-lora` and `sql-lora2` point to the same underlying LoRA, but this need not be the case. The LoRA configuration options accept the same values as they do in `EngineArgs`.
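
For example, you can list the registered models and then address one of the LoRA adapters by name through the OpenAI-compatible endpoints. This is a minimal sketch, assuming the server above is running on the default `localhost:8000`; the prompt and sampling parameters are placeholders:

```terminal
$ # lists the base model plus the two LoRA modules registered above
$ curl http://localhost:8000/v1/models

$ # route a completion request to one of the LoRA adapters by its module name
$ curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "sql-lora",
        "prompt": "Write a SQL query that selects all users",
        "max_tokens": 32
    }'
```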

No work has been done here to scope client permissions to specific models.
2024-02-17 12:00:48 -08:00
| Name | Last commit | Date |
|---|---|---|
| assets/logos | Update README.md (#1292) | 2023-10-08 23:15:50 -07:00 |
| dev/engine | [DOC] Add additional comments for LLMEngine and AsyncLLMEngine (#1011) | 2024-01-11 19:26:49 -08:00 |
| getting_started | [ROCm] support Radeon™ 7900 series (gfx1100) without using flash-attention (#2768) | 2024-02-10 23:14:37 -08:00 |
| models | multi-LoRA as extra models in OpenAI server (#2775) | 2024-02-17 12:00:48 -08:00 |
| quantization | [CI] Ensure documentation build is checked in CI (#2842) | 2024-02-12 22:53:07 -08:00 |
| serving | docs: fix langchain (#2736) | 2024-02-03 18:17:55 -08:00 |
| conf.py | [CI] Ensure documentation build is checked in CI (#2842) | 2024-02-12 22:53:07 -08:00 |
| index.rst | [CI] Ensure documentation build is checked in CI (#2842) | 2024-02-12 22:53:07 -08:00 |