vllm / vllm / transformers_utils (at commit 4cc24f01b1)
Latest commit: e2fbaee725 by Nick Hill, "[BugFix][Frontend] Use LoRA tokenizer in OpenAI APIs (#6227)", co-authored by Cyrus Leung <cyrus.tl.leung@gmail.com>, 2024-07-18 15:13:30 +08:00
configs            | [Speculative Decoding] Medusa Implementation with Top-1 proposer (#4978)               | 2024-07-09 18:34:02 -07:00
tokenizer_group    | [Bugfix] Use RayActorError for older versions of Ray in RayTokenizerGroupPool (#6039)  | 2024-07-01 20:12:40 +00:00
tokenizers         | [Mypy] Part 3 fix typing for nested directories for most of directory (#4161)          | 2024-04-22 21:32:44 -07:00
__init__.py        | [Tokenizer] Add an option to specify tokenizer (#284)                                  | 2023-06-28 09:46:58 -07:00
config.py          | [Speculative Decoding] Medusa Implementation with Top-1 proposer (#4978)               | 2024-07-09 18:34:02 -07:00
detokenizer.py     | [BugFix][Frontend] Use LoRA tokenizer in OpenAI APIs (#6227)                           | 2024-07-18 15:13:30 +08:00
image_processor.py | [Core] Dynamic image size support for VLMs (#5276)                                     | 2024-07-02 20:34:00 -07:00
tokenizer.py       | [BugFix][Frontend] Use LoRA tokenizer in OpenAI APIs (#6227)                           | 2024-07-18 15:13:30 +08:00