Author | Commit | Message | Date
Woosuk Kwon | 28c3f12104 | [Minor] Remove unused code in attention (#2384) | 2024-01-08 13:13:08 -08:00
Woosuk Kwon | c884819135 | Fix eager mode performance (#2377) | 2024-01-08 10:11:06 -08:00
Nadav Shmayovits | 05921a9a7a | Changed scheduler to use deques instead of lists (#2290) (Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>) | 2024-01-07 09:48:07 -08:00
Iskren Ivov Chernev | d0215a58e7 | Ensure metrics are logged regardless of requests (#2347) | 2024-01-05 05:24:42 -08:00
ljss | aee8ef661a | Miner fix of type hint (#2340) | 2024-01-03 21:27:56 -08:00
Woosuk Kwon | 2e0b6e7757 | Bump up to v0.2.7 (#2337) | 2024-01-03 17:35:56 -08:00
Ronen Schaffer | 74d8d77626 | Remove unused const TIMEOUT_TO_PREVENT_DEADLOCK (#2321) | 2024-01-03 15:49:07 -08:00
Zhuohan Li | fd4ea8ef5c | Use NCCL instead of ray for control-plane communication to remove serialization overhead (#2221) | 2024-01-03 11:30:22 -08:00
Woosuk Kwon | 6ef00b03a2 | Enable CUDA graph for GPTQ & SqueezeLLM (#2318) | 2024-01-03 09:52:29 -08:00
Roy | 9140561059 | [Minor] Fix typo and remove unused code (#2305) | 2024-01-02 19:23:15 -08:00
Jong-hun Shin | 4934d49274 | Support GPT-NeoX Models without attention biases (#2301) | 2023-12-30 11:42:04 -05:00
Zhuohan Li | e0ff920001 | [BUGFIX] Do not return ignored sentences twice in async llm engine (#2258) | 2023-12-26 13:41:09 +08:00
Woosuk Kwon | a1b9cb2a34 | [BugFix] Fix recovery logic for sequence group (#2186) | 2023-12-20 21:52:37 -08:00
Woosuk Kwon | 3a4fd5ca59 | Disable Ray usage stats collection (#2206) | 2023-12-20 21:52:08 -08:00
Antoni Baum | bd29cf3d3a | Remove Sampler copy stream (#2209) | 2023-12-20 00:04:33 -08:00
Hanzhi Zhou | 31bff69151 | Make _prepare_sample non-blocking and use pinned memory for input buffers (#2207) | 2023-12-19 16:52:46 -08:00
Woosuk Kwon | ba4f826738 | [BugFix] Fix weight loading for Mixtral with TP (#2208) | 2023-12-19 16:16:11 -08:00
avideci | de60a3fb93 | Added DeciLM-7b and DeciLM-7b-instruct (#2062) | 2023-12-19 02:29:33 -08:00
Woosuk Kwon | 21d5daa4ac | Add warning on CUDA graph memory usage (#2182) | 2023-12-18 18:16:17 -08:00
Suhong Moon | 290e015c6c | Update Help Text for --gpu-memory-utilization Argument (#2183) | 2023-12-18 11:33:24 -08:00
kliuae | 1b7c791d60 | [ROCm] Fixes for GPTQ on ROCm (#2180) | 2023-12-18 10:41:04 -08:00
JohnSaxon | bbe4466fd9 | [Minor] Fix typo (#2166) (Co-authored-by: John-Saxon <zhang.xiangxuan@oushu.com>) | 2023-12-17 23:28:49 -08:00
Harry Mellor | 08133c4d1a | Add SSL arguments to API servers (#2109) | 2023-12-18 10:56:23 +08:00
Woosuk Kwon | 8041b7305e | [BugFix] Raise error when max_model_len is larger than KV cache (#2163) | 2023-12-17 17:08:23 -08:00
Woosuk Kwon | 671af2b1c0 | Bump up to v0.2.6 (#2157) | 2023-12-17 10:34:56 -08:00
Woosuk Kwon | 6f41f0e377 | Disable CUDA graph for SqueezeLLM (#2161) | 2023-12-17 10:24:25 -08:00
Woosuk Kwon | 2c9b638065 | [Minor] Fix a typo in .pt weight support (#2160) | 2023-12-17 10:12:44 -08:00
Antoni Baum | a7347d9a6d | Make sampler less blocking (#1889) | 2023-12-17 23:03:49 +08:00
Woosuk Kwon | 30fb0956df | [Minor] Add more detailed explanation on quantization argument (#2145) | 2023-12-17 01:56:16 -08:00
Woosuk Kwon | 3a765bd5e1 | Temporarily enforce eager mode for GPTQ models (#2154) | 2023-12-17 01:51:12 -08:00
Woosuk Kwon | c3372e87be | Remove dependency on CuPy (#2152) | 2023-12-17 01:49:07 -08:00
Woosuk Kwon | e1d5402238 | Fix all-reduce memory usage (#2151) | 2023-12-17 01:44:45 -08:00
Woosuk Kwon | 3d1cfbfc74 | [Minor] Delete Llama tokenizer warnings (#2146) | 2023-12-16 22:05:18 -08:00
Woosuk Kwon | 37ca558103 | Optimize model execution with CUDA graph (#1926) (Co-authored-by: Chen Shen <scv119@gmail.com>, Antoni Baum <antoni.baum@protonmail.com>) | 2023-12-16 21:12:08 -08:00
Roy | eed74a558f | Simplify weight loading logic (#2133) | 2023-12-16 12:41:23 -08:00
Woosuk Kwon | 2acd76f346 | [ROCm] Temporarily remove GPTQ ROCm support (#2138) | 2023-12-15 17:13:58 -08:00
CHU Tianxiang | 0fbfc4b81b | Add GPTQ support (#916) | 2023-12-15 03:04:22 -08:00
Yunfeng Bai | c06170cc8e | Add a flag to include stop string in output text (#1976) | 2023-12-15 00:45:58 -08:00
mezuzza | 6774bd50b0 | Fix typing in AsyncLLMEngine & add toml to requirements-dev (#2100) | 2023-12-14 00:19:41 -08:00
Woosuk Kwon | 31c1f3255e | Bump up to v0.2.5 (#2095) | 2023-12-13 23:56:15 -08:00
Antoni Baum | 21d93c140d | Optimize Mixtral with expert parallelism (#2090) | 2023-12-13 23:55:07 -08:00
Woosuk Kwon | f1c8520146 | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00
Woosuk Kwon | 518369d78c | Implement lazy model loader (#2044) | 2023-12-12 22:21:45 -08:00
Woosuk Kwon | 30bad5c492 | Fix peak memory profiling (#2031) | 2023-12-12 22:01:53 -08:00
Megha Agarwal | 6428f1d051 | Support MPT with GQA (#1938) (Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>) | 2023-12-12 10:16:05 -08:00
Woosuk Kwon | cb3f30c600 | Upgrade transformers version to 4.36.0 (#2046) | 2023-12-11 18:39:14 -08:00
Woosuk Kwon | 31d2ab4aff | Remove python 3.10 requirement (#2040) | 2023-12-11 12:26:42 -08:00
Woosuk Kwon | 4dd4b5c538 | Bump up to v0.2.4 (#2034) | 2023-12-11 11:49:39 -08:00
Woosuk Kwon | 6120e5aaea | Fix import error msg for megablocks (#2038) | 2023-12-11 11:40:56 -08:00
Woosuk Kwon | 81ce2a4b26 | [Minor] Fix type annotation in Mixtral (#2036) | 2023-12-11 11:32:39 -08:00