Commit Graph

533 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| gottlike | 42c02f5892 | Fix quickstart.rst typo jinja (#1964) | 2023-12-07 08:34:44 -08:00 |
| Jie Li | ebede26ebf | Make InternLM follow rope_scaling in config.json (#1956) (Co-authored-by: lijie8 <lijie8@sensetime.com>) | 2023-12-07 08:32:08 -08:00 |
| Peter Götz | d940ce497e | Fix typo in adding_model.rst (#1947): adpated -> adapted | 2023-12-06 10:04:26 -08:00 |
| Antoni Baum | 05ff90b692 | Save pytorch profiler output for latency benchmark (#1871): Save profiler output; Apply feedback from code review | 2023-12-05 20:55:55 -08:00 |
| dancingpipi | 1d9b737e05 | Support ChatGLMForConditionalGeneration (#1932) (Co-authored-by: shujunhua1 <shujunhua1@jd.com>) | 2023-12-05 10:52:48 -08:00 |
| Roy | 60dc62dc9e | add custom server params (#1868) | 2023-12-03 12:59:18 -08:00 |
| Woosuk Kwon | 0f90effc66 | Bump up to v0.2.3 (#1903) | 2023-12-03 12:27:47 -08:00 |
| Woosuk Kwon | 464dd985e3 | Fix num_gpus when TP > 1 (#1852) | 2023-12-03 12:24:30 -08:00 |
| Massimiliano Pronesti | c07a442854 | chore(examples-docs): upgrade to OpenAI V1 (#1785) | 2023-12-03 01:11:22 -08:00 |
| Woosuk Kwon | cd3aa153a4 | Fix broken worker test (#1900) | 2023-12-02 22:17:33 -08:00 |
| Woosuk Kwon | 9b294976a2 | Add PyTorch-native implementation of custom layers (#1898) | 2023-12-02 21:18:40 -08:00 |
| Simon Mo | 5313c2cb8b | Add Production Metrics in Prometheus format (#1890) | 2023-12-02 16:37:44 -08:00 |
| Woosuk Kwon | 5f09cbdb63 | Fix broken sampler tests (#1896) (Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>) | 2023-12-02 16:06:17 -08:00 |
| Simon Mo | 4cefa9b49b | [Docs] Update the AWQ documentation to highlight performance issue (#1883) | 2023-12-02 15:52:47 -08:00 |
| Jerry | f86bd6190a | Fix the typo in SamplingParams' docstring (#1886) | 2023-12-01 02:06:36 -08:00 |
| Woosuk Kwon | e5452ddfd6 | Normalize head weights for Baichuan 2 (#1876) | 2023-11-30 20:03:58 -08:00 |
| Woosuk Kwon | d06980dfa7 | Fix Baichuan tokenizer error (#1874) | 2023-11-30 18:35:50 -08:00 |
| Adam Brusselback | 66785cc05c | Support chat template and echo for chat API (#1756) | 2023-11-30 16:43:13 -08:00 |
| Massimiliano Pronesti | 05a38612b0 | docs: add instruction for langchain (#1162) | 2023-11-30 10:57:44 -08:00 |
| Roy | d27f4bae39 | Fix rope cache key error (#1867) | 2023-11-30 08:29:28 -08:00 |
| aisensiy | 8d8c2f6ffe | Support max-model-len argument for throughput benchmark (#1858) | 2023-11-30 08:10:24 -08:00 |
| Woosuk Kwon | 51d3cb951d | Remove max_num_seqs in latency benchmark script (#1855) | 2023-11-30 00:00:32 -08:00 |
| Woosuk Kwon | e74b1736a1 | Add profile option to latency benchmark script (#1839) | 2023-11-29 23:42:52 -08:00 |
| Allen | f07c1ceaa5 | [FIX] Fix docker build error (#1831) (#1832) (Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>) | 2023-11-29 23:06:50 -08:00 |
| Jee Li | 63b2206ad0 | Avoid multiple instantiations of the RoPE class (#1828) | 2023-11-29 23:06:27 -08:00 |
| Woosuk Kwon | 27feead2f8 | Refactor Worker & InputMetadata (#1843) | 2023-11-29 22:16:37 -08:00 |
| Michael McCulloch | c782195662 | Disable Logs Requests should Disable Logging of requests. (#1779) (Co-authored-by: Michael McCulloch <mjm.gitlab@fastmail.com>) | 2023-11-29 21:50:02 -08:00 |
| Simon Mo | 0f621c2c7d | [Docs] Add information about using shared memory in docker (#1845) | 2023-11-29 18:33:56 -08:00 |
| Woosuk Kwon | a9e4574261 | Refactor Attention (#1840) | 2023-11-29 15:37:31 -08:00 |
| FlorianJoncour | 0229c386c5 | Better integration with Ray Serve (#1821) (Co-authored-by: FlorianJoncour <florian@zetta-sys.com>) | 2023-11-29 13:25:43 -08:00 |
| Woosuk Kwon | a7b3e33078 | [Fix] Fix RoPE in ChatGLM-32K (#1841) | 2023-11-29 13:01:19 -08:00 |
| Zhuohan Li | e19a64c7ef | [FIX] Fix formatting error in main branch (#1822) | 2023-11-28 16:56:43 -08:00 |
| Zhuohan Li | 1cb4ad8de9 | [FIX] Fix formatting error | 2023-11-29 00:40:19 +00:00 |
| explainerauthors | 6ed068a71a | Use the type BlockTable (#1791) | 2023-11-28 16:34:05 -08:00 |
| Zhuohan Li | 708e6c18b0 | [FIX] Fix class naming (#1803) | 2023-11-28 14:08:01 -08:00 |
| Woosuk Kwon | b943890484 | Fix OPT param names (#1819) | 2023-11-28 11:22:44 -08:00 |
| explainerauthors | a1125ad4df | Correct comments in parallel_state.py (#1818) | 2023-11-28 10:19:35 -08:00 |
| ljss | a8b150c595 | Init model on GPU to reduce CPU memory footprint (#1796) | 2023-11-27 11:18:26 -08:00 |
| Yunmo Chen | 665cbcec4b | Added echo function to OpenAI API server. (#1504) | 2023-11-26 21:29:17 -08:00 |
| Woosuk Kwon | 7c600440f7 | Fix model docstrings (#1764) | 2023-11-23 23:04:44 -08:00 |
| Yanming W | e0c6f556e8 | [Build] Avoid building too many extensions (#1624) | 2023-11-23 16:31:19 -08:00 |
| ljss | de23687d16 | Fix repetition penalty aligned with huggingface (#1577) | 2023-11-22 14:41:44 -08:00 |
| ljss | 4cea74c73b | Set top_p=0 and top_k=-1 in greedy sampling (#1748) | 2023-11-22 12:51:09 -08:00 |
| Casper | a921d8be9d | [DOCS] Add engine args documentation (#1741) | 2023-11-22 12:31:27 -08:00 |
| 陈序 | 094f716bf2 | Add stop_token_ids in SamplingParams.__repr__ (#1745) | 2023-11-21 20:13:53 -08:00 |
| Zhuohan Li | 7d761fe3c1 | [FIX] Fix the case when input_is_parallel=False for ScaledActivation (#1737) | 2023-11-20 23:56:48 -08:00 |
| Woosuk Kwon | cf35d8f3d7 | [BugFix] Fix TP support for AWQ (#1731) | 2023-11-20 21:42:45 -08:00 |
| boydfd | 4bb6b67188 | fix RAM OOM when load large models in tensor parallel mode. (#1395) (Co-authored-by: ran_lin <rlin@thoughtworks.com>) | 2023-11-20 19:02:42 -08:00 |
| ljss | 819b18e7ba | Rewrite torch.repeat_interleave to remove cpu synchronization (#1599) | 2023-11-20 17:46:32 -08:00 |
| Zhuofan | 19849db573 | [Fix] Fix bugs in scheduler (#1727) | 2023-11-20 16:10:50 -08:00 |