squall / vllm
vllm / vllm / model_executor (at f3ff63c3f4)

Latest commit: cd7edc4e87 by Lucas Wilkinson — [Bugfix] Fix empty (nullptr) channelwise scales when loading wNa16 using compressed tensors (#6798), 2024-07-25 15:05:09 -07:00
Name                  Last commit message                                                                                    Last commit date
guided_decoding       [Bugfix] use diskcache in outlines _get_guide #5436 (#6203)                                           2024-07-08 11:23:24 -07:00
layers                [Bugfix] Fix empty (nullptr) channelwise scales when loading wNa16 using compressed tensors (#6798)  2024-07-25 15:05:09 -07:00
model_loader          [Bugfix] fix modelscope compatible issue (#6730)                                                      2024-07-24 05:04:46 -07:00
models                [Model] Adding support for MiniCPM-V (#4087)                                                          2024-07-24 20:59:30 -07:00
__init__.py           [Core] Refactor Attention Take 2 (#3462)                                                              2024-03-25 04:39:33 +00:00
custom_op.py          [Hardware][Intel GPU] Add Intel GPU(XPU) inference backend (#3814)                                    2024-06-17 11:01:25 -07:00
pooling_metadata.py   [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734)                                    2024-05-11 11:30:37 -07:00
sampling_metadata.py  [Misc] Consolidate and optimize logic for building padded tensors (#6541)                             2024-07-20 04:17:24 +00:00
utils.py              [Hardware][Neuron] Refactor neuron support (#3471)                                                    2024-03-22 01:22:17 +00:00