Tri Dao
36587c01cb
[LayerNorm] Update layer_norm_linear
2024-03-18 23:15:33 -07:00
Markus Krimmel
6bbc532388
fix: cast the alibi slopes to torch.float32 (#846)
2024-03-15 00:49:40 -07:00
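The kernel consumes alibi slopes as fp32; per this fix's title, the Python wrapper now casts them rather than relying on the caller. A minimal sketch (shapes and values illustrative, not from the commit):

```python
import torch
from flash_attn import flash_attn_func

q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.bfloat16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Per-head alibi slopes. After this fix they are cast to torch.float32
# inside the wrapper, so a bf16 tensor like this one is accepted;
# passing float32 directly remains the safe choice.
slopes = torch.rand(8, device="cuda", dtype=torch.bfloat16)
out = flash_attn_func(q, k, v, causal=True, alibi_slopes=slopes)
```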
Grigory Sizov
2a15840f09
Enable paged attention in varlen forward (#831)
* Enable paged attention in varlen forward
* Format + fix padding
2024-03-15 00:48:19 -07:00
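A sketch of the paged varlen path this PR enables, assuming the `block_table` keyword on `flash_attn_varlen_func` (page indices and lengths are illustrative; each page here holds 256 tokens):

```python
import torch
import torch.nn.functional as F
from flash_attn import flash_attn_varlen_func

nheads, headdim, page_size, num_pages = 8, 64, 256, 16

# Two variable-length query sequences packed into one tensor.
seqlens_q = torch.tensor([5, 3], dtype=torch.int32, device="cuda")
cu_seqlens_q = F.pad(seqlens_q.cumsum(0, dtype=torch.int32), (1, 0))
q = torch.randn(8, nheads, headdim, device="cuda", dtype=torch.float16)

# Paged KV: keys/values live in fixed-size pages; block_table maps each
# sequence to the pages backing it.
k_pages = torch.randn(num_pages, page_size, nheads, headdim, device="cuda", dtype=torch.float16)
v_pages = torch.randn_like(k_pages)
block_table = torch.tensor([[0, 1], [2, 3]], dtype=torch.int32, device="cuda")
seqlens_k = torch.tensor([300, 290], dtype=torch.int32, device="cuda")
cu_seqlens_k = F.pad(seqlens_k.cumsum(0, dtype=torch.int32), (1, 0))

out = flash_attn_varlen_func(
    q, k_pages, v_pages, cu_seqlens_q, cu_seqlens_k,
    max_seqlen_q=5, max_seqlen_k=300, causal=True, block_table=block_table,
)
```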
Tri Dao
6c9e60de56
Bump to v2.5.6
2024-03-01 22:09:56 -08:00
Tri Dao
87a1277653
Bump to v2.5.5
2024-02-21 15:58:23 -08:00
Tri Dao
43950dda45
Bump to v2.5.4
2024-02-20 16:30:16 -08:00
Tri Dao
5cdabc2809
Bump to v2.5.3
2024-02-10 01:06:27 -08:00
Tri Dao
a190df011c
Add window_size option to ParallelMHA
2024-02-10 01:02:14 -08:00
Tri Dao
61a7772479
Bump to v2.5.2
2024-01-31 02:44:24 -08:00
Tri Dao
ef0ed10622
Add window_size option to MHA and GPT
2024-01-31 02:42:23 -08:00
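A usage sketch for the new option, assuming `window_size=(left, right)` on the `MHA` module mirrors the tuple convention of the flash-attn kernels (model sizes illustrative):

```python
import torch
from flash_attn.modules.mha import MHA

# Sliding-window (local) attention: each token attends to at most 256
# tokens to its left and none to its right (causal local attention).
mha = MHA(
    embed_dim=1024, num_heads=8, causal=True,
    use_flash_attn=True, window_size=(256, 0),
).to("cuda", dtype=torch.float16)

x = torch.randn(2, 512, 1024, device="cuda", dtype=torch.float16)
out = mha(x)  # (2, 512, 1024), same shape as the input
```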
Tri Dao
dc72d960a7
[CI] Install torch 2.3 using index
2024-01-30 14:32:29 -08:00
Tri Dao
daf37a9d8a
Bump to v2.5.1
2024-01-29 21:03:38 -08:00
Avelina9X
c94cd09744
Added missing docstrings for args and returns in bert_padding.py (#795)
* Updated docstrings of bert_padding.py
Added docstrings for missing arguments in the unpad and pad methods.
* Update bert_padding.py
Fixed spelling mistakes
2024-01-27 09:16:25 -08:00
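For reference, the two functions this PR documents round-trip a padded batch through the packed layout the varlen kernels expect. A sketch (at this version `unpad_input` returns four values; later releases add a fifth):

```python
import torch
from flash_attn.bert_padding import unpad_input, pad_input

batch, seqlen, dim = 2, 8, 16
x = torch.randn(batch, seqlen, dim)
attention_mask = torch.tensor([[1] * 8, [1] * 5 + [0] * 3])  # 1 = token, 0 = pad

# Strip padding: packed tokens, their flat indices, cumulative sequence
# lengths for the varlen kernels, and the longest sequence in the batch.
x_unpad, indices, cu_seqlens, max_seqlen = unpad_input(x, attention_mask)

# Re-insert padding (as zeros) to recover the (batch, seqlen, dim) layout.
x_repad = pad_input(x_unpad, indices, batch, seqlen)
assert torch.equal(x * attention_mask.unsqueeze(-1), x_repad)
```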
Tao He
204c3c6d1b
Fixes an error in a comment (#785)
Signed-off-by: Tao He <sighingnow@gmail.com>
2024-01-23 12:38:29 -08:00
Tri Dao
197f2083a2
Bump to v2.5.0
2024-01-22 23:40:10 -08:00
Tri Dao
54e80a3829
Implement paged KV cache
Co-authored-by: ljss <450993438@qq.com>
2024-01-22 22:47:30 -08:00
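A minimal decoding-step sketch of the paged cache landing here, via the `block_table` argument of `flash_attn_with_kvcache` (page geometry and indices illustrative):

```python
import torch
from flash_attn import flash_attn_with_kvcache

batch, nheads, headdim = 2, 8, 64
page_size, num_pages = 256, 8

# The KV cache is stored as fixed-size pages rather than one contiguous
# region per sequence; block_table[i] lists the pages backing sequence i.
k_cache = torch.zeros(num_pages, page_size, nheads, headdim, device="cuda", dtype=torch.float16)
v_cache = torch.zeros_like(k_cache)
block_table = torch.tensor([[0, 1], [2, 3]], dtype=torch.int32, device="cuda")
cache_seqlens = torch.tensor([300, 290], dtype=torch.int32, device="cuda")

# One generation step: each new query token attends to its paged cache.
q = torch.randn(batch, 1, nheads, headdim, device="cuda", dtype=torch.float16)
out = flash_attn_with_kvcache(
    q, k_cache, v_cache, cache_seqlens=cache_seqlens,
    block_table=block_table, causal=True,
)
```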
Tri Dao
bdcae547c7
[LayerNorm] Don't exit early in the backward pass (fix #781)
2024-01-22 22:40:06 -08:00
Tri Dao
e43a4ceaab
[CI] Fix CUDA 12.2.2 compilation
2024-01-21 17:23:39 -08:00
Tri Dao
f9d7376126
Bump to v2.4.3
2024-01-21 17:14:37 -08:00
Curtis "Fjord" Hawthorne
d8aacc510c
return z_loss (#768)
2024-01-21 15:23:41 -08:00
Tri Dao
a7b66ae25a
Simplify writing softmax to gmem
2024-01-13 00:25:04 -08:00
Tri Dao
c9861a032d
[LayerNorm] Initialize mean and rstd tensor using x.device
2024-01-09 16:30:31 -08:00
Tri Dao
abbc131173
[LayerNorm] Switch from CUDA to Triton implementation
2024-01-05 00:31:17 -08:00
Tri Dao
f5b308e258
[LayerNorm] Rename layernorm.py -> layer_norm.py
2024-01-05 00:21:03 -08:00
Tri Dao
665b55e2e2
[LayerNorm] Implement parallel layer norm in Triton
2024-01-04 23:15:35 -08:00
Tri Dao
aa5c6438c5
[LayerNorm] Implement rowscale in Triton layernorm
2024-01-04 01:07:03 -08:00
jiaxingli
386e391117
Fix: implement deterministic backward in mha (#748)
* fix deterministic
* fix deterministic
2024-01-02 18:13:56 -08:00
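The flag itself lives on the functional interface (introduced in v2.4.1 below); this PR fixes its plumbing through the mha module. A sketch of what it buys:

```python
import torch
from flash_attn import flash_attn_func

q = torch.randn(2, 512, 8, 64, device="cuda", dtype=torch.float16, requires_grad=True)
k, v = torch.randn_like(q), torch.randn_like(q)

# deterministic=True selects a slightly slower backward pass whose
# gradients are bitwise-reproducible across runs; the default backward
# accumulates with atomics and can vary from run to run.
out = flash_attn_func(q, k, v, causal=True, deterministic=True)
out.sum().backward()
```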
Tri Dao
1a2c3e8c25
Bump to v2.4.2
2023-12-25 16:28:57 -08:00
Tri Dao
73df3be7d5
Add test for BTLM init
2023-12-25 15:16:27 -08:00
Tri Dao
7ffba9a501
Implement BTLM model
2023-12-24 20:35:12 -08:00
Tri Dao
2e29dacf0c
Implement muParam
2023-12-24 20:34:48 -08:00
Tri Dao
3f7d5786ba
Pass alibi slopes to flash_attn_with_kvcache during generation
2023-12-24 20:31:59 -08:00
Tri Dao
f844852485
Bump to v2.4.1
2023-12-23 21:00:39 -08:00
Tri Dao
732654583c
Implement deterministic backward (thanks to Meituan)
2023-12-23 17:57:36 -08:00
Tri Dao
2c7d7b7396
Implement norm head for Baichuan2
2023-12-22 16:55:40 -08:00
Tri Dao
68f178aa4b
[CI] Don't compile for Python 3.7 with PyTorch 2.2
2023-12-22 10:10:02 -08:00
Tri Dao
7316277303
Bump to v2.4.0
2023-12-22 00:09:53 -08:00
Tri Dao
c3b2196652
Add Alibi to MHA, test with Baichuan-13B
2023-12-21 22:49:55 -08:00
Tri Dao
5ab9b3667b
Clean up alibi, implement non-causal alibi
2023-12-21 22:27:40 -08:00
Tri Dao
bc28eacc60
Format flash_attn_interface.py
2023-12-19 23:13:53 -08:00
Tri Dao
0a146185d6
[Gen] Remove minor dead code
2023-12-19 22:57:39 -08:00
Sanghun Cho
e4f726fc44
Support alibi, by Sanghun Cho from Kakao Brain
* hard-code alibi in fwd
* use params.h as num_heads
* hard-code alibi in bwd
* add alibi on/off option
* compute alibi_start, ratio outside of kernels
* fix minor merge conflict
* add test_alibi.py
* change apply_alibi() location before masking
* add alibi in splitkv kernel
* fix backward func # of returns
* add out-of-bound check in apply_alibi()
* update test_alibi.py
* update test_alibi.py for kvcache
* simplify alibi parameter interface
* fix performance issue by computing alibi outside of branch
* update test_flash_attn_varlen_func() for left padding
* implement alibi_slopes (b, nh) loading
* optimize apply_alibi() a bit
* update test cases for alibi_slopes loading
* reflect stylistic comments
* disable "seqlenq_ngroups_swapped" when using alibi
---------
Co-authored-by: monk.detective <monk.detective@kakaobrain.com>
2023-12-19 22:56:06 -08:00
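A usage sketch for the interface this PR lands: slopes are passed per head and, per the "(b, nh) loading" bullet above, may also vary per batch element (values illustrative):

```python
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# alibi_slopes has shape (nheads,) or (batch, nheads); the bias added to
# each attention score is -slope * distance between query and key.
slopes = torch.rand(batch, nheads, device="cuda", dtype=torch.float32)
out = flash_attn_func(q, k, v, causal=True, alibi_slopes=slopes)
```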
Tri Dao
cd089597fd
[LayerNorm] Implement dropout in fused residual + LN/RMSNorm
2023-12-19 16:26:07 -08:00
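A sketch of the fused op after this commit, using the module path from the rename listed above; argument names follow the Triton implementation:

```python
import torch
from flash_attn.ops.triton.layer_norm import layer_norm_fn

hidden = 1024
x = torch.randn(4, 512, hidden, device="cuda", dtype=torch.float16)
residual = torch.randn_like(x)
weight = torch.ones(hidden, device="cuda", dtype=torch.float16)
bias = torch.zeros(hidden, device="cuda", dtype=torch.float16)

# One Triton kernel computes dropout(x) + residual, then LayerNorm;
# prenorm=True also returns the updated residual stream, optionally
# kept in fp32 for numerical stability.
out, new_residual = layer_norm_fn(
    x, weight, bias, residual=residual,
    dropout_p=0.1, prenorm=True, residual_in_fp32=True,
)
```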
Tri Dao
08124c8f9c
[CrossEntropy] Implement logit_scale option
2023-12-16 18:39:37 -08:00
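A sketch of the option, assuming a constructor argument on the fused loss (scale value illustrative):

```python
import torch
from flash_attn.losses.cross_entropy import CrossEntropyLoss

vocab, batch = 32000, 16
logits = torch.randn(batch, vocab, device="cuda", dtype=torch.float16)
labels = torch.randint(0, vocab, (batch,), device="cuda")

# logit_scale multiplies the logits before the softmax, so output
# scaling (as in muP-style models) fuses into the loss kernel instead
# of materializing scaled logits separately.
loss = CrossEntropyLoss(logit_scale=0.5)(logits, labels)
```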
Tri Dao
9356a1c038
[LayerNorm] Implement layer_norm_linear
2023-11-30 21:46:07 -08:00
Tri Dao
92dd5703ec
Bump to v2.3.6
2023-11-27 16:23:39 -08:00
Tri Dao
d4a7c8ffbb
[CI] Only compile for CUDA 11.8 & 12.2, MAX_JOBS=2, add torch-nightly
2023-11-27 16:21:28 -08:00
Jeremy Reizenstein
ce3e7280f8
Allow varlen_fwd to take optional seqused_k (#647)
Co-authored-by: bottler <bottler@users.noreply.github.com>
2023-11-27 00:41:23 -08:00
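`seqused_k` lets attention read only a valid prefix of each sequence's preallocated KV region, e.g. an in-place cache. A sketch; note the assumption that the high-level wrapper forwards the argument, whereas at this commit it may exist only on the low-level `varlen_fwd` binding:

```python
import torch
import torch.nn.functional as F
from flash_attn import flash_attn_varlen_func

nheads, headdim = 8, 64
# Each of the two sequences has 512 KV slots allocated...
seqlens_alloc = torch.tensor([512, 512], dtype=torch.int32, device="cuda")
cu_seqlens_k = F.pad(seqlens_alloc.cumsum(0, dtype=torch.int32), (1, 0))
k = torch.randn(1024, nheads, headdim, device="cuda", dtype=torch.float16)
v = torch.randn_like(k)
# ...but only a prefix of each is valid; attention must ignore the rest.
seqused_k = torch.tensor([300, 287], dtype=torch.int32, device="cuda")

q = torch.randn(2, nheads, headdim, device="cuda", dtype=torch.float16)
cu_seqlens_q = torch.tensor([0, 1, 2], dtype=torch.int32, device="cuda")

out = flash_attn_varlen_func(
    q, k, v, cu_seqlens_q, cu_seqlens_k, max_seqlen_q=1, max_seqlen_k=512,
    causal=True, seqused_k=seqused_k,  # assumed keyword; see note above
)
```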
Tri Dao
23b77c8148
Bump to v2.3.5
2023-11-26 19:08:28 -08:00
Tri Dao
2c3baba4a6
Bump to v2.3.4
2023-11-19 23:21:31 -08:00