squall/vllm
vllm/docs/source at commit eb6c50cdc2
Latest commit: 5ae5ed1e60 by Cyrus Leung (2024-05-28 13:29:31 -07:00)
[Core] Consolidate prompt arguments to LLM engines (#4328)
Co-authored-by: Roger Wang <ywang@roblox.com>
assets/                [Doc] add visualization for multi-stage dockerfile (#4456)                                 2024-04-30 17:41:59 +00:00
community/             [Docs] Add Dropbox as sponsors (#5089)                                                     2024-05-28 10:29:09 -07:00
dev/                   [Core] Consolidate prompt arguments to LLM engines (#4328)                                 2024-05-28 13:29:31 -07:00
getting_started/       [Doc] add ccache guide in doc (#5012)                                                      2024-05-23 23:21:54 +00:00
models/                [Kernel][Backend][Model] Blocksparse flash attention kernel and Phi-3-Small model (#4799)  2024-05-24 22:00:52 -07:00
quantization/          Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290)                              2024-04-03 14:15:55 -07:00
serving/               [Core] Consolidate prompt arguments to LLM engines (#4328)                                 2024-05-28 13:29:31 -07:00
conf.py                [CI] Disable non-lazy string operation on logging (#4326)                                  2024-04-26 00:16:58 -07:00
generate_examples.py   Add example scripts to documentation (#4225)                                               2024-04-22 16:36:54 +00:00
index.rst              [Core] Consolidate prompt arguments to LLM engines (#4328)                                 2024-05-28 13:29:31 -07:00