squall / vllm
vllm / lora @ a1242324c9
Latest commit: 97b030005c by raywanb, [Model] LoRA gptbigcode implementation (#3949), 2024-05-22 13:58:59 -07:00
__init__.py              [Experimental] Add multi-LoRA support (#1804)                                     2024-01-23 15:26:37 -08:00
fully_sharded_layers.py  [Bugfix] Fixed error in slice_lora_b for MergedQKVParallelLinearWithLora (#4609)  2024-05-07 10:59:07 -07:00
layers.py                [Lora] Support long context lora (#4787)                                          2024-05-18 16:05:23 +09:00
lora.py                  [Mypy] Typing lora folder (#4337)                                                 2024-04-25 19:13:50 +00:00
models.py                [Model] LoRA gptbigcode implementation (#3949)                                    2024-05-22 13:58:59 -07:00
punica.py                [Kernel] Full Tensor Parallelism for LoRA Layers (#3524)                          2024-04-27 00:03:48 -07:00
request.py               [Lora] Support long context lora (#4787)                                          2024-05-18 16:05:23 +09:00
utils.py                 [Lora] Support long context lora (#4787)                                          2024-05-18 16:05:23 +09:00
worker_manager.py        [Lora] Support long context lora (#4787)                                          2024-05-18 16:05:23 +09:00