Tri Dao
b32efb1a4d
Don't need to reduce row_sum during online softmax
2024-02-20 13:33:38 -08:00
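For context on the commit above: in the online-softmax recurrence used by FlashAttention, each new block of scores only rescales a per-row running max, running sum, and output accumulator; the running sum is carried along and used once, to normalize the output at the very end, so it never has to be reduced per block. A minimal PyTorch sketch of that recurrence (illustrative only, not the kernel code):

```python
import torch

def attention_online_softmax(q, k, v, block=128):
    """Single-head attention via the online-softmax recurrence (sketch only)."""
    seqlen_q, d = q.shape
    o = torch.zeros(seqlen_q, v.shape[-1])        # output accumulator
    m = torch.full((seqlen_q,), float("-inf"))    # running row max
    l = torch.zeros(seqlen_q)                     # running row sum of exp
    for start in range(0, k.shape[0], block):
        s = (q @ k[start:start + block].T) * (d ** -0.5)  # scores for this KV block
        m_new = torch.maximum(m, s.max(dim=-1).values)
        p = torch.exp(s - m_new[:, None])
        scale = torch.exp(m - m_new)
        # Rescale the previous accumulators; the row sum is only carried,
        # it is used just once, to normalize the output at the end.
        l = l * scale + p.sum(dim=-1)
        o = o * scale[:, None] + p @ v[start:start + block]
        m = m_new
    return o / l[:, None]
```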
Tri Dao
d9a5cb291c
Fix dv = torch::empty_like(k) for mha_bwd_varlen as well
2024-02-10 01:03:00 -08:00
Brian Hirsh
2423cca3ad
fix backward for when query and key have different contiguity (#818)
2024-02-10 01:01:27 -08:00
Grigory Sizov
4687936413
Fix Windows build (#816)
2024-02-07 17:41:53 -08:00
Jeremy Reizenstein
0658e320f6
Preprocessor switches to control functionality (#788)
...
For faster and smaller builds in some simple cases, provide switches to allow disabling:
- backward
- alibi
- uneven k
- dropout
- local attention
Co-authored-by: Jeremy Francis Reizenstein <bottler@users.noreply.github.com>
2024-01-29 20:44:23 -08:00
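A hedged sketch of how such compile-time switches are typically passed when building the extension. The FLASHATTENTION_DISABLE_* macro names below mirror the features listed in the commit message but are assumptions for illustration; the exact spellings should be verified against the repository's setup.py and flash_api.cpp.

```python
# Sketch only: the feature switches are passed as NVCC preprocessor defines.
disabled = ["BACKWARD", "ALIBI", "UNEVEN_K", "DROPOUT", "LOCAL"]
nvcc_flags = ["-O3", "--use_fast_math"]
nvcc_flags += [f"-DFLASHATTENTION_DISABLE_{name}" for name in disabled]
# nvcc_flags would then go into CUDAExtension(..., extra_compile_args={"nvcc": nvcc_flags}).
print(nvcc_flags)
```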
Tri Dao
54e80a3829
Implement paged KV cache
...
Co-authored-by: ljss <450993438@qq.com>
2024-01-22 22:47:30 -08:00
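The paged KV cache stores keys and values in fixed-size blocks addressed through a per-sequence block table rather than one contiguous buffer per sequence. A rough sketch of the layout; the names and shapes here (block_table, page_size) are assumptions to illustrate the idea, not a definitive description of the kernel interface:

```python
import torch

# Illustrative shapes for a paged KV-cache layout (sketch only).
num_blocks, page_size, nheads_k, headdim = 64, 256, 2, 64
batch, max_blocks_per_seq = 4, 16

k_cache = torch.empty(num_blocks, page_size, nheads_k, headdim, dtype=torch.float16)
v_cache = torch.empty_like(k_cache)
# block_table[b, i] = index of the i-th physical block backing sequence b.
block_table = torch.randint(0, num_blocks, (batch, max_blocks_per_seq), dtype=torch.int32)

# Logical token position t of sequence b then lives at:
#   k_cache[block_table[b, t // page_size], t % page_size]
```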
Tri Dao
36bc29edf7
Use int64_t instead of uint32_t in kernel_traits.h
2024-01-22 22:39:29 -08:00
Tri Dao
000b67f5d8
Use int64_t instead of uint32_t for index_t
2024-01-22 11:25:50 -08:00
Tri Dao
ea8a25ca38
Remove configure in bwd kernel launch
2024-01-21 15:28:33 -08:00
Grigory Sizov
af01244ddd
Add split-kv and M<->H swap to varlen forward decoding attention (#754)
...
* Add split-k, M<->H to varseq path
* skip M<->H when dropout>0, fix LSE
2024-01-21 15:28:36 -08:00
Tri Dao
8f4d82cf5e
Update cutlass to v3.4.0
2024-01-20 22:30:06 -08:00
Tri Dao
395e5a0dba
Move rotary device functions to a separate file
2024-01-20 18:01:18 -08:00
Tri Dao
3e2c827d9a
Remove unused kernel_traits file
2024-01-20 17:41:44 -08:00
Tri Dao
66a127aef8
Refactor masking in fwd pass into 1 object
2024-01-20 17:39:53 -08:00
Tri Dao
ed4959b2eb
Change inline to __forceinline__, use __grid_constant__ param
2024-01-20 17:38:47 -08:00
Tri Dao
6f706eff96
Make Softmax an object
2024-01-19 16:09:31 -08:00
Tri Dao
4ea866ca19
Make Alibi an object
2024-01-15 00:07:11 -08:00
Tri Dao
5aca153d6d
Move bwd preprocess kernels to a separate file
2024-01-14 16:57:03 -08:00
Tri Dao
df1418f9db
Move softmax_rescale_o to softmax.h
2024-01-14 15:06:06 -08:00
Tri Dao
6777336a1c
Move masking to a separate file (mask.h)
2024-01-14 12:43:47 -08:00
Tri Dao
9448264ddd
Remove seqq_parallel backward kernel that's not used
2024-01-14 12:25:49 -08:00
Tri Dao
1274ec3e7e
Move dropout to a separate file (dropout.h)
2024-01-14 12:19:17 -08:00
Tri Dao
10dad61277
apply_dropout now takes tensor of rowcol layout
2024-01-14 01:03:23 -08:00
Tri Dao
d9cbcfb41c
Remove dead code in philox.cuh
2024-01-13 02:02:03 -08:00
Tri Dao
a7b66ae25a
Simplify writing softmax to gmem
2024-01-13 00:25:04 -08:00
Tri Dao
8d1b169ed1
Simplify SmemLayoutVtransposed in kernel_traits.h
2024-01-12 11:53:29 -08:00
Tri Dao
0842ec0da4
Don't dispatch to local if window size >= seqlen_k
2023-12-23 20:59:26 -08:00
Tri Dao
732654583c
Implement deterministic backward (thanks to Meituan)
2023-12-23 17:57:36 -08:00
Tri Dao
5ab9b3667b
Clean up alibi, implement non-causal alibi
2023-12-21 22:27:40 -08:00
Sanghun Cho
e4f726fc44
Support alibi, by Sanghun Cho from Kakao Brain
...
* hard-code alibi in fwd
* use params.h as num_heads
* hard-code alibi in bwd
* add alibi on/off option
* compute alibi_start, ratio outside of kernels
* fix minor merge conflict
* add test_alibi.py
* change apply_alibi() location before masking
* add alibi in splitkv kernel
* fix backward func # of returns
* add out-of-bound check in apply_alibi()
* update test_alibi.py
* update test_alibi.py for kvcache
* simplify alibi parameter interface
* fix performance issue by computing alibi outside of branch
* update test_flash_attn_varlen_func() for left padding
* implement alibi_slopes (b, nh) loading
* optimize apply_alibi() a bit
* update test cases for alibi_slopes loading
* reflect stylistic comments
* disable "seqlenq_ngroups_swapped" when using alibi
---------
Co-authored-by: monk.detective <monk.detective@kakaobrain.com>
2023-12-19 22:56:06 -08:00
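For reference on what the ALiBi support above computes: ALiBi adds a per-head bias to the attention scores that grows linearly with the query-key distance, and the later alibi_slopes (b, nh) loading lets each batch element carry its own slopes. A textbook-style sketch of the bias, not the kernel implementation; the exact alignment and sign conventions when seqlen_q != seqlen_k should be checked against the code:

```python
import torch

def alibi_bias(slopes, seqlen_q, seqlen_k):
    """Reference sketch of the ALiBi bias: a per-head linear penalty on distance.

    slopes: (nheads,). Alignment convention here (last query aligned to last key)
    is an assumption for illustration.
    """
    q_idx = torch.arange(seqlen_q)[:, None] + (seqlen_k - seqlen_q)
    k_idx = torch.arange(seqlen_k)[None, :]
    dist = (q_idx - k_idx).abs()
    return -slopes[:, None, None] * dist  # (nheads, seqlen_q, seqlen_k)

# The bias would be added to the (b, nheads, seqlen_q, seqlen_k) scores before softmax.
```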
Jeremy Reizenstein
ce3e7280f8
Allow varlen_fwd to take optional seqused_k (#647)
...
Co-authored-by: bottler <bottler@users.noreply.github.com>
2023-11-27 00:41:23 -08:00
Tri Dao
b4bf9cc1f3
Fix performance regression with causal
2023-11-26 19:07:25 -08:00
Tri Dao
db2f80692c
Write zero to out / grad if seqlen_q or seqlen_k is zero
2023-11-19 22:20:01 -08:00
Driss Guessous
dc4b9ad6c4
add checks (#640)
2023-11-19 20:43:27 -08:00
Tri Dao
5a83425442
Change constexpr int to constexpr static int
2023-10-08 16:26:33 -07:00
Tri Dao
e279bf8ed9
[Gen] Accept cache_batch_idx to index into the KV cache
2023-10-03 16:27:26 -07:00
Tri Dao
083e8f525f
Implement local attention
...
Co-authored-by: Timothee Lacroix <t@mistral.ai>
2023-09-26 16:31:08 -07:00
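Local (sliding-window) attention restricts each query to a window of nearby keys. A shape-only sketch of the mask this implies, assuming the query positions are aligned to the end of the key sequence; the conventions are illustrative, not the kernel's exact semantics:

```python
import torch

def local_attention_mask(seqlen_q, seqlen_k, window_left, window_right):
    """Sketch of a sliding-window mask: query i may attend key j only if
    i - window_left <= j <= i + window_right (positions aligned bottom-right)."""
    q_idx = torch.arange(seqlen_q)[:, None] + (seqlen_k - seqlen_q)
    k_idx = torch.arange(seqlen_k)[None, :]
    return (k_idx >= q_idx - window_left) & (k_idx <= q_idx + window_right)
```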
Tri Dao
65c234ed90
Don't over-allocate dq_accum in case of varlen
2023-09-24 00:36:07 -07:00
Tri Dao
1879e089c7
Reduce number of templates for headdim > 128
2023-09-23 22:24:30 -07:00
Tri Dao
2d8ea9a530
Swap seqlen_q and ngroups when seqlen_q=1 (h/t Daniel Haziza)
2023-09-20 23:38:22 -07:00
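The "swap" commits in this log (this one and the earlier MQA ones below) rest on a simple reshape: during decoding (seqlen_q = 1) with grouped-query/MQA heads, the (batch, 1, h, d) query tensor can be viewed so that the query-head groups play the role of the sequence dimension, giving the kernel more rows to tile per KV head. A shape-only sketch, assuming query heads are grouped contiguously per KV head:

```python
import torch

batch, nheads, nheads_k, headdim = 2, 32, 4, 128   # 32 query heads sharing 4 KV heads
ngroups = nheads // nheads_k

q = torch.randn(batch, 1, nheads, headdim)          # decoding step: seqlen_q == 1
# View the single-row query with ngroups standing in for seqlen_q
# (contiguous grouping of query heads per KV head is an assumption of this sketch).
q_swapped = q.reshape(batch, nheads_k, ngroups, headdim).transpose(1, 2)
assert q_swapped.shape == (batch, ngroups, nheads_k, headdim)
```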
Tri Dao
3250ff3d82
Swap seqlen_q, nheads for MQA when seqlen_q=1 for fwd (h/t Daniel H)
2023-09-18 14:52:16 -07:00
Tri Dao
43617deab9
Remove template for (IsEvenMN=T, IsEvenK=F) to speed up compilation
2023-09-18 12:21:36 -07:00
Tri Dao
c984208ddb
Set block size to 64 x 64 for kvcache to avoid nvcc segfaults
2023-09-17 16:14:58 -07:00
Tri Dao
ccbb14f38e
Implement rotary embedding in flash_attn_with_kvcache
2023-09-16 01:20:16 -07:00
Tri Dao
56b7fc6ee0
Simplify the implementation of KVcache attn by appending KV first
2023-09-13 15:55:48 -07:00
Tri Dao
bb9beb3645
Remove some unused headers
2023-09-12 12:37:10 -07:00
Tri Dao
ee77b931b9
Swap seqlen_q and nheads for MQA to speed it up (h/t Daniel Haziza)
2023-09-10 22:56:33 -07:00
Tri Dao
37c6e05406
Implement flash_attn_with_kvcache
2023-09-04 00:11:44 -07:00
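A hedged usage sketch of the decoding entry point introduced here. The argument names (k_cache, v_cache, k, v, cache_seqlens, causal) follow the public Python API as I understand it; check the current docstring for the authoritative signature. Requires a CUDA GPU and the flash-attn package.

```python
import torch
from flash_attn import flash_attn_with_kvcache

batch, nheads, nheads_k, headdim, max_seqlen = 2, 16, 16, 128, 4096

q = torch.randn(batch, 1, nheads, headdim, dtype=torch.float16, device="cuda")
k_cache = torch.zeros(batch, max_seqlen, nheads_k, headdim, dtype=torch.float16, device="cuda")
v_cache = torch.zeros_like(k_cache)
cache_seqlens = torch.full((batch,), 100, dtype=torch.int32, device="cuda")  # tokens already cached
k_new = torch.randn(batch, 1, nheads_k, headdim, dtype=torch.float16, device="cuda")
v_new = torch.randn_like(k_new)

# Appends the new K/V at position cache_seqlens (updating the cache in place)
# and attends over the cached sequence.
out = flash_attn_with_kvcache(q, k_cache, v_cache, k=k_new, v=v_new,
                              cache_seqlens=cache_seqlens, causal=True)
```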
Tri Dao
6a89b2f121
Remove constexpr in launch template to fix CI compilation
2023-09-03 22:59:41 -07:00
Tri Dao
1dc1b6c8f2
Bump to v2.1.2
2023-09-03 22:23:05 -07:00