Shoichi Uchinami | a53222544c | [Kernel] Add punica dimension for Swallow-MS-7B LoRA (#4134) | 2024-04-17 10:02:45 -07:00
Jee Li | 989ae2538d | [Kernel] Add punica dimension for Baichuan-13B (#4053) | 2024-04-13 07:55:05 -07:00
Antoni Baum | 1e96c3341a | Add extra punica sizes to support bigger vocabs (#4015) | 2024-04-11 22:18:57 +00:00
Jee Li | 566b57c5c4 | [Kernel] support non-zero cuda devices in punica kernels (#3636) | 2024-03-27 00:37:42 +00:00
Jee Li | 8af890a865 | Enable more models to inference based on LoRA (#3382) (Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>) | 2024-03-25 18:09:31 -07:00
Or Sharir | ae0ccb4017 | Add missing kernel for CodeLlama-34B on A/H100 (no tensor parallelism) when using Multi-LoRA. (#3350) | 2024-03-13 12:18:25 -07:00
Woosuk Kwon | 929b4f2973 | Add LoRA support for Gemma (#3050) | 2024-02-28 13:03:28 -08:00
Antoni Baum | 9b945daaf1 | [Experimental] Add multi-LoRA support (#1804) (Co-authored-by: Chen Shen <scv119@gmail.com>, Shreyas Krishnaswamy <shrekris@anyscale.com>, Avnish Narayan <avnish@anyscale.com>) | 2024-01-23 15:26:37 -08:00