squall/vllm
vllm/entrypoints/openai at commit e23a43aef8
Latest commit: 1d7c940d74 by Thomas Parnell, "Add option to completion API to truncate prompt tokens" (#3144), 2024-04-05 10:15:42 -07:00
__init__.py: Change the name to vLLM (#150), 2023-06-17 03:07:40 -07:00
api_server.py: [Frontend][Bugfix] allow using the default middleware with a root path (#3788), 2024-04-02 01:20:28 -07:00
cli_args.py: [Doc] Add docs about OpenAI compatible server (#3288), 2024-03-18 22:05:34 -07:00
protocol.py: Add option to completion API to truncate prompt tokens (#3144), 2024-04-05 10:15:42 -07:00
serving_chat.py: [Misc] Include matched stop string/token in responses (#2976), 2024-03-25 17:31:32 -07:00
serving_completion.py: Add option to completion API to truncate prompt tokens (#3144), 2024-04-05 10:15:42 -07:00
serving_engine.py: Add option to completion API to truncate prompt tokens (#3144), 2024-04-05 10:15:42 -07:00
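The latest change across protocol.py, serving_completion.py, and serving_engine.py adds a prompt-truncation option to the completion API (#3144). Below is a minimal sketch of how a client might exercise it against a server started from api_server.py; the field name truncate_prompt_tokens and its placement in extra_body are inferred from the commit title rather than confirmed by this listing, and the model name, port, and value 512 are placeholders.

```python
# Sketch: calling vLLM's OpenAI-compatible completion API with prompt
# truncation. Assumes the server was started with something like:
#   python -m vllm.entrypoints.openai.api_server --model facebook/opt-125m
# The field name `truncate_prompt_tokens` is inferred from PR #3144's title.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # default vLLM server address (assumption)
    api_key="EMPTY",                      # vLLM accepts a dummy key by default
)

completion = client.completions.create(
    model="facebook/opt-125m",  # placeholder: whatever model the server serves
    prompt="A very long prompt that may exceed the model's context window ...",
    max_tokens=32,
    # Fields outside the standard OpenAI schema are passed via extra_body;
    # here, truncate the prompt to its last 512 tokens (hypothetical value).
    extra_body={"truncate_prompt_tokens": 512},
)
print(completion.choices[0].text)
```

The request-side field would be declared in protocol.py and honored where serving_completion.py and serving_engine.py process prompts; see #3144 for the authoritative details.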