vllm / vllm / transformers_utils (at commit ef99a78760)
Latest commit: 6fc4e6e07a by Patrick von Platen, [Model] Add Mistral Tokenization to improve robustness and chat encoding (#7739), 2024-08-27 12:40:02 +00:00
Name                 Last commit                                                                        Date
configs              [Speculative Decoding] EAGLE Implementation with Top-1 proposer (#6830)            2024-08-22 02:42:24 -07:00
tokenizer_group      [mypy] Misc. typing improvements (#7417)                                           2024-08-13 09:20:20 +08:00
tokenizers           [Model] Add Mistral Tokenization to improve robustness and chat encoding (#7739)   2024-08-27 12:40:02 +00:00
__init__.py          [Tokenizer] Add an option to specify tokenizer (#284)                              2023-06-28 09:46:58 -07:00
config.py            [Speculative Decoding] EAGLE Implementation with Top-1 proposer (#6830)            2024-08-22 02:42:24 -07:00
detokenizer.py       [Model] Add Mistral Tokenization to improve robustness and chat encoding (#7739)   2024-08-27 12:40:02 +00:00
image_processor.py   [Core] Dynamic image size support for VLMs (#5276)                                 2024-07-02 20:34:00 -07:00
tokenizer.py         [Model] Add Mistral Tokenization to improve robustness and chat encoding (#7739)   2024-08-27 12:40:02 +00:00
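
The listing above points at tokenizer.py as the tokenizer-loading entry point that #7739 extends with Mistral-native tokenization. A minimal usage sketch follows, assuming this directory exposes a get_tokenizer helper with roughly this keyword interface and that #7739 added a "mistral" tokenizer mode; both assumptions are inferred from the file and commit names, not verified against commit ef99a78760.

```python
# Sketch: loading a tokenizer through vllm.transformers_utils.
# Assumptions (inferred from the listing, not verified at this revision):
#   - tokenizer.py exposes get_tokenizer(name, tokenizer_mode=..., ...)
#   - #7739 added a Mistral-native mode alongside the default HF path
from vllm.transformers_utils.tokenizer import get_tokenizer

# Default path: wraps a Hugging Face tokenizer for the given model.
tok = get_tokenizer("facebook/opt-125m", tokenizer_mode="auto")
print(tok.encode("Hello, world!"))

# Mistral-native path (mode name assumed from the #7739 commit title).
mistral_tok = get_tokenizer(
    "mistralai/Mistral-7B-Instruct-v0.3",
    tokenizer_mode="mistral",
)
```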