Author | Commit | Message | Date
Austin Veselka | eefeb16464 | [Kernel] Full Tensor Parallelism for LoRA Layers (#3524) (Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>) | 2024-04-27 00:03:48 -07:00
Woosuk Kwon | 468d761b32 | [Misc] Reduce supported Punica dtypes (#4304) | 2024-04-23 18:54:33 -07:00
Shoichi Uchinami | a53222544c | [Kernel] Add punica dimension for Swallow-MS-7B LoRA (#4134) | 2024-04-17 10:02:45 -07:00
Jee Li | 989ae2538d | [Kernel] Add punica dimension for Baichuan-13B (#4053) | 2024-04-13 07:55:05 -07:00
Antoni Baum | 1e96c3341a | Add extra punica sizes to support bigger vocabs (#4015) | 2024-04-11 22:18:57 +00:00
fuchen.ljl | 08ccee1e83 | punica fix-bgmv-kernel-640 (#4007) | 2024-04-11 08:59:26 -07:00
Jee Li | 566b57c5c4 | [Kernel] support non-zero cuda devices in punica kernels (#3636) | 2024-03-27 00:37:42 +00:00
Jee Li | 8af890a865 | Enable more models to inference based on LoRA (#3382) (Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>) | 2024-03-25 18:09:31 -07:00
Simon Mo | 8e67598aa6 | [Misc] fix line length for entire codebase (#3444) | 2024-03-16 00:36:29 -07:00
Or Sharir | ae0ccb4017 | Add missing kernel for CodeLlama-34B on A/H100 (no tensor parallelism) when using Multi-LoRA. (#3350) | 2024-03-13 12:18:25 -07:00
Terry | 0bba88df03 | Enhance lora tests with more layer and rank variations (#3243) | 2024-03-09 17:14:16 -08:00
whyiug | c59e120c55 | Feature add lora support for Qwen2 (#3177) | 2024-03-07 21:58:24 -08:00
Woosuk Kwon | 929b4f2973 | Add LoRA support for Gemma (#3050) | 2024-02-28 13:03:28 -08:00
Woosuk Kwon | f8ecb84c02 | Speed up Punica compilation (#2632) | 2024-01-27 17:46:56 -08:00
Antoni Baum | 9b945daaf1 | [Experimental] Add multi-LoRA support (#1804) (Co-authored-by: Chen Shen <scv119@gmail.com>, Shreyas Krishnaswamy <shrekris@anyscale.com>, Avnish Narayan <avnish@anyscale.com>) | 2024-01-23 15:26:37 -08:00