squall / vllm
vllm / vllm / transformers_utils (at commit 954f7305a1)

Latest commit: 6ce01f3066 by Woosuk Kwon, 2024-08-01 18:29:52 -07:00
[Performance] Optimize get_seqs (#7051)
configs              [Model] Initialize support for InternVL2 series models (#6514)                      2024-07-29 10:16:30 +00:00
tokenizer_group      [mypy] Enable following imports for some directories (#6681)                        2024-07-31 10:38:03 +08:00
tokenizers           [Mypy] Part 3 fix typing for nested directories for most of directory (#4161)      2024-04-22 21:32:44 -07:00
__init__.py          [Tokenizer] Add an option to specify tokenizer (#284)                               2023-06-28 09:46:58 -07:00
config.py            [Model] Initialize support for InternVL2 series models (#6514)                      2024-07-29 10:16:30 +00:00
detokenizer.py       [Performance] Optimize get_seqs (#7051)                                             2024-08-01 18:29:52 -07:00
image_processor.py   [Core] Dynamic image size support for VLMs (#5276)                                  2024-07-02 20:34:00 -07:00
tokenizer.py         [Core] Support dynamically loading Lora adapter from HuggingFace (#6234)            2024-07-22 15:42:40 -07:00