vllm / docs / source (at commit e3470f8753)

Latest commit: e941f88584 by Simon Mo, "[Docs] Add acknowledgment for sponsors" (#4925), 2024-05-21 00:17:25 -07:00
Name                  Last commit                                                                     Date
assets                [Doc] add visualization for multi-stage dockerfile (#4456)                     2024-04-30 17:41:59 +00:00
community             [Docs] Add acknowledgment for sponsors (#4925)                                  2024-05-21 00:17:25 -07:00
dev                   [Doc] Add API reference for offline inference (#4710)                          2024-05-13 17:47:42 -07:00
getting_started       Unable to find Punica extension issue during source code installation (#4494)  2024-05-01 00:42:09 +00:00
models                [Model] Add Phi-2 LoRA support (#4886)                                         2024-05-21 14:24:17 +09:00
offline_inference     [Doc] Add API reference for offline inference (#4710)                          2024-05-13 17:47:42 -07:00
quantization          Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290)                  2024-04-03 14:15:55 -07:00
serving               Support to serve vLLM on Kubernetes with LWS (#4829)                           2024-05-16 16:37:29 -07:00
conf.py               [CI] Disable non-lazy string operation on logging (#4326)                      2024-04-26 00:16:58 -07:00
generate_examples.py  Add example scripts to documentation (#4225)                                   2024-04-22 16:36:54 +00:00
index.rst             [Docs] Add acknowledgment for sponsors (#4925)                                  2024-05-21 00:17:25 -07:00