| Author | Commit | Message | Date |
|--------|--------|---------|------|
| Mark Mozolewski | 786b7f18a5 | Add code-revision config argument for Hugging Face Hub (#2892) | 2024-02-17 22:36:53 -08:00 |
| Roy | 4efbac6d35 | Migrate AquilaForCausalLM to LlamaForCausalLM (#2867) | 2024-02-14 12:30:24 -08:00 |
| Philipp Moritz | 317b29de0f | Remove Yi model definition, please use LlamaForCausalLM instead (#2854). Co-authored-by: Roy <jasonailu87@gmail.com> | 2024-02-13 14:22:22 -08:00 |
| Philipp Moritz | ea356004d4 | Revert "Refactor llama family models (#2637)" (#2851). This reverts commit 5c976a7e1a. | 2024-02-13 09:24:59 -08:00 |
| Roy | 5c976a7e1a | Refactor llama family models (#2637) | 2024-02-13 00:09:23 -08:00 |
| Antoni Baum | 9b945daaf1 | [Experimental] Add multi-LoRA support (#1804). Co-authored-by: Chen Shen <scv119@gmail.com>, Shreyas Krishnaswamy <shrekris@anyscale.com>, Avnish Narayan <avnish@anyscale.com> | 2024-01-23 15:26:37 -08:00 |
| Woosuk Kwon | 3d1cfbfc74 | [Minor] Delete Llama tokenizer warnings (#2146) | 2023-12-16 22:05:18 -08:00 |
| Woosuk Kwon | d06980dfa7 | Fix Baichuan tokenizer error (#1874) | 2023-11-30 18:35:50 -08:00 |
| Simon Mo | 5ffc0d13a2 | Migrate linter from pylint to ruff (#1665) | 2023-11-20 11:58:01 -08:00 |
| Megha Agarwal | b514d3c496 | Revert MptConfig to MPTConfig (#1668) | 2023-11-16 01:19:39 -08:00 |
| GoHomeToMacDonal | 1a2bbc9301 | ChatGLM Support (#1261) | 2023-11-06 16:09:33 -08:00 |
| Roy | e7f579eb97 | Support Yi model (#1567) | 2023-11-06 15:26:03 -08:00 |
| Woosuk Kwon | 1fe0990023 | Remove MPTConfig (#1529) | 2023-11-01 15:29:05 -07:00 |
| Dan Lord | 7013a80170 | Add support for spaces_between_special_tokens | 2023-10-30 16:52:56 -07:00 |
| Ricardo Lu | beac8dd461 | fix: don't skip first special token. (#1497) | 2023-10-29 04:26:36 -07:00 |
| Lu Wang | de89472897 | Fix the issue for AquilaChat2-* models (#1339) | 2023-10-13 11:51:29 -07:00 |
| Woosuk Kwon | e7c8555d06 | Bump up transformers version & Remove MistralConfig (#1254) | 2023-10-13 10:05:26 -07:00 |
| Antoni Baum | ec3b5ce9cc | Improve detokenization performance (#1338) | 2023-10-13 09:59:07 -07:00 |
| Federico Cassano | 66d18a7fb0 | add support for tokenizer revision (#1163). Co-authored-by: Zhuohan Li <zhuohan123@gmail.com> | 2023-10-02 19:19:46 -07:00 |
| Woosuk Kwon | a8e98aee0c | Fix Mistral model (#1220) | 2023-09-28 10:44:05 -07:00 |
| Chris Bamford | bb1ba58f06 | [Mistral] Mistral-7B-v0.1 support (#1196). Co-authored-by: timlacroix <t@mistral.ai> | 2023-09-28 10:41:03 -07:00 |
| Qing | 28e616c4e3 | fix qwen-14b model (#1173) | 2023-09-27 16:33:16 -07:00 |
| Woosuk Kwon | 64ca424e75 | Fix warning message on LLaMA FastTokenizer (#1037) | 2023-09-14 17:33:32 -07:00 |
| Antoni Baum | dd54a4b026 | Fix detokenization leaving special tokens (#1044). Signed-off-by: Antoni Baum <antoni.baum@protonmail.com> | 2023-09-14 16:37:03 -07:00 |
| Jasmond L | ab019eea75 | Add Model Revision Support (#1014). Co-authored-by: Jasmond Loh <Jasmond.Loh@hotmail.com>, Zhuohan Li <zhuohan123@gmail.com> | 2023-09-13 15:20:02 -07:00 |
| Antoni Baum | 9841d48a10 | Use TGI-like incremental detokenization (#984) | 2023-09-13 13:38:01 -07:00 |
| Nelson Liu | e15932bb60 | Only emit warning about internal tokenizer if it isn't being used (#939) | 2023-09-05 00:50:55 +09:00 |
| shunxing1234 | ad5f2fe34c | Add support for aquila (#663) (add aquila; fix bugs; delete pdb; delete whitespace; format; fix order). Signed-off-by: ftgreat <ftgreat@163.com>, shunxing1234 <xw747777271@gmail.com>; Co-authored-by: ftgreat <ftgreat@163.com> | 2023-08-22 00:13:36 -07:00 |
| Ikko Eltociear Ashimine | 805de738f6 | Fix typo in tokenizer.py (#750): conjuction -> conjunction | 2023-08-14 22:26:36 -07:00 |
| Qing | a57d13cc96 | add QWen-7b (#685). Co-authored-by: wq.chu <wq.chu@tianrang-inc.com> | 2023-08-08 13:50:38 -07:00 |
| Zhuohan Li | 1b0bd0fe8a | Add Falcon support (new) (#592) | 2023-08-02 14:04:39 -07:00 |
| codethazine | 20b0d88d16 | Add support for baichuan (#365) | 2023-07-17 13:50:55 -07:00 |
| xcnick | c6dfc3cdbe | Fix handling of special tokens in decoding. (#418) | 2023-07-12 11:14:56 -04:00 |
| Woosuk Kwon | ddfdf470ae | Add trust_remote_code arg to get_config (#405) | 2023-07-08 15:24:17 -07:00 |
| codethazine | a945fcc2ae | Add trust-remote-code flag to handle remote tokenizers (#364) | 2023-07-07 11:04:58 -07:00 |
| Woosuk Kwon | 404422f42e | [Model] Add support for MPT (#334) | 2023-07-03 16:47:53 -07:00 |
| Zhuohan Li | d6fa1be3a8 | [Quality] Add code formatter and linter (#326) | 2023-07-03 11:31:55 -07:00 |
| Woosuk Kwon | 998d9d1509 | [Tokenizer] Add tokenizer mode (#298) | 2023-06-28 14:19:22 -07:00 |
| Woosuk Kwon | 4338cc4750 | [Tokenizer] Add an option to specify tokenizer (#284) | 2023-06-28 09:46:58 -07:00 |