vllm / vllm / transformers_utils (at commit 35bd215168)
Latest commit 39d3f8d94f by tastelikefeet: [Bugfix] Fix code for downloading models from modelscope (#8443), 2024-09-28 08:24:12 -07:00
Name             | Last commit                                                                        | Last updated
configs/         | [Misc] Update config loading for Qwen2-VL and remove Granite (#8837)               | 2024-09-26 07:45:30 -07:00
tokenizer_group/ | [mypy] Misc. typing improvements (#7417)                                           | 2024-08-13 09:20:20 +08:00
tokenizers/      | [Misc] Remove vLLM patch of BaichuanTokenizer (#8921)                              | 2024-09-28 08:11:25 +00:00
__init__.py      | [Bugfix] Fix code for downloading models from modelscope (#8443)                   | 2024-09-28 08:24:12 -07:00
config.py        | [Misc] Update config loading for Qwen2-VL and remove Granite (#8837)               | 2024-09-26 07:45:30 -07:00
detokenizer.py   | [Core][Bugfix] Support prompt_logprobs returned with speculative decoding (#8047)  | 2024-09-24 17:29:56 -07:00
processor.py     | [Core][Frontend] Support Passing Multimodal Processor Kwargs (#8657)               | 2024-09-23 07:44:48 +00:00
tokenizer.py     | [Misc] Remove vLLM patch of BaichuanTokenizer (#8921)                              | 2024-09-28 08:11:25 +00:00
utils.py         | [Core][Bugfix] Accept GGUF model without .gguf extension (#8056)                   | 2024-09-02 08:43:26 -04:00