squall / vllm
docs / source (at commit ccb63a8245)
Latest commit c579b750a0 (Zhuohan Li, 2024-05-13 18:48:00 -07:00): [Doc] Add meetups to the doc (#4798)
Name                  Last commit                                                                      Date
..
assets                [Doc] add visualization for multi-stage dockerfile (#4456)                      2024-04-30 17:41:59 +00:00
community             [Doc] Add meetups to the doc (#4798)                                             2024-05-13 18:48:00 -07:00
dev                   [Doc] Add API reference for offline inference (#4710)                           2024-05-13 17:47:42 -07:00
getting_started       Unable to find Punica extension issue during source code installation (#4494)   2024-05-01 00:42:09 +00:00
models                [Doc] Shorten README by removing supported model list (#4796)                   2024-05-13 16:23:54 -07:00
offline_inference     [Doc] Add API reference for offline inference (#4710)                           2024-05-13 17:47:42 -07:00
quantization          Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290)                   2024-04-03 14:15:55 -07:00
serving               [Doc] Add API reference for offline inference (#4710)                           2024-05-13 17:47:42 -07:00
conf.py               [CI] Disable non-lazy string operation on logging (#4326)                       2024-04-26 00:16:58 -07:00
generate_examples.py  Add example scripts to documentation (#4225)                                    2024-04-22 16:36:54 +00:00
index.rst             [Doc] Add meetups to the doc (#4798)                                             2024-05-13 18:48:00 -07:00