squall/vllm
vllm/lora at commit f10797c0ce
Latest commit: 21063c11c7 by Aaron Pham, 2024-11-06 07:11:55 +00:00
[CI/Build] drop support for Python 3.8 EOL (#8464)
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
Name                     Last commit                                                                          Date
ops                      [Kernel][LoRA] Add assertion for punica sgmv kernels (#7585)                         2024-09-23 18:57:42 +00:00
__init__.py              [Experimental] Add multi-LoRA support (#1804)                                        2024-01-23 15:26:37 -08:00
fully_sharded_layers.py  [Kernel][RFC] Refactor the punica kernel based on Triton (#5036)                     2024-07-31 17:12:24 -07:00
layers.py                [Bugfix] Fix lora loading for Compressed Tensors in #9120 (#9179)                    2024-10-09 12:10:44 +00:00
lora.py                  [Model] Add base class for LoRA-supported models (#5018)                             2024-06-27 16:03:04 +08:00
models.py                [CI/Build] drop support for Python 3.8 EOL (#8464)                                   2024-11-06 07:11:55 +00:00
punica.py                [Kernel][LoRA] Add assertion for punica sgmv kernels (#7585)                         2024-09-23 18:57:42 +00:00
request.py               [Core] Support Lora lineage and base model metadata management (#6315)               2024-09-20 06:20:56 +00:00
utils.py                 [Misc][LoRA] Support loading LoRA weights for target_modules in reg format (#9275)   2024-10-11 12:31:21 +00:00
worker_manager.py        [Core] Support dynamically loading Lora adapter from HuggingFace (#6234)             2024-07-22 15:42:40 -07:00