vLLM

Easy, fast, and cheap LLM serving for everyone

| Documentation | Blog | Paper | Discord | Twitter/X |


vLLM, AMD, Anyscale Meet & Greet at Ray Summit 2024 (Monday, Sept 30th, 5-7pm PT) at Marriott Marquis San Francisco

We are excited to announce our special vLLM event in collaboration with AMD and Anyscale. Join us to learn more about recent advancements of vLLM on MI300X. Register here and be a part of the event!


Latest News 🔥


About

vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests
  • Fast model execution with CUDA/HIP graph
  • Quantizations: GPTQ, AWQ, INT4, INT8, and FP8 (see the sketch after this list)
  • Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
  • Speculative decoding
  • Chunked prefill
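
Many of these features are enabled through arguments to the LLM entry point. The snippet below is a minimal sketch, assuming the argument names used by recent vLLM releases (quantization, enable_chunked_prefill) and an example AWQ checkpoint from the Hugging Face Hub; check the documentation for the options available in your version.

from vllm import LLM

# Sketch: load an AWQ-quantized model with chunked prefill enabled.
# Argument names and the model name are assumptions for illustration;
# consult the vLLM docs for the authoritative list of engine arguments.
llm = LLM(
    model="TheBloke/Llama-2-7B-Chat-AWQ",  # example quantized checkpoint
    quantization="awq",
    enable_chunked_prefill=True,
)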

Performance benchmark: We include a performance benchmark that compares vLLM with other LLM serving engines (TensorRT-LLM, text-generation-inference, and lmdeploy).

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor parallelism and pipeline parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server (see the sketch after this list)
  • Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs, TPUs, and AWS Neuron
  • Prefix caching support
  • Multi-LoRA support
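
For online serving, the OpenAI-compatible server lets existing OpenAI client code talk to vLLM by changing only the base URL. Below is a minimal sketch, assuming a server is already running locally (for example, started with vllm serve or python -m vllm.entrypoints.openai.api_server; see the docs for the exact command) and that the openai Python package is installed; the model name is only an example.

from openai import OpenAI

# Point the official OpenAI client at the local vLLM server.
# The base URL and placeholder API key follow vLLM's documented convention.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # example model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(completion.choices[0].message.content)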

vLLM seamlessly supports most popular open-source models on Hugging Face, including:

  • Transformer-like LLMs (e.g., Llama)
  • Mixture-of-Experts LLMs (e.g., Mixtral)
  • Embedding Models (e.g., E5-Mistral)
  • Multi-modal LLMs (e.g., LLaVA)

Find the full list of supported models here.

Getting Started

Install vLLM with pip or from source:

pip install vllm
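
Once installed, a quick way to check that everything works is offline batched generation. A minimal sketch, using a small example model (any supported Hugging Face model can be substituted):

from vllm import LLM, SamplingParams

# Prompts to complete and the sampling settings to use.
prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# facebook/opt-125m is only a small example; substitute any supported model.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)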

Visit our documentation to learn more.

Contributing

We welcome and value any contributions and collaborations. Please check out CONTRIBUTING.md for how to get involved.

Sponsors

vLLM is a community project. Our compute resources for development and testing are supported by the following organizations. Thank you for your support!

  • a16z
  • AMD
  • Anyscale
  • AWS
  • Crusoe Cloud
  • Databricks
  • DeepInfra
  • Dropbox
  • Google Cloud
  • Lambda Lab
  • NVIDIA
  • Replicate
  • Roblox
  • RunPod
  • Sequoia Capital
  • Skywork AI
  • Trainy
  • UC Berkeley
  • UC San Diego
  • ZhenFund

We also have an official fundraising venue through OpenCollective. We plan to use the fund to support the development, maintenance, and adoption of vLLM.

Citation

If you use vLLM for your research, please cite our paper:

@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}

Contact Us

  • For technical questions and feature requests, please use GitHub issues or discussions.
  • For discussions with fellow users, please use Discord.
  • For security disclosures, please use GitHub's security advisory feature.
  • For collaborations and partnerships, please contact us at vllm-questions AT lists.berkeley.edu.