538d570c96 | 2023-07-19 08:04:57 +00:00 | danthe3rd
    Fix compile error on MSVC
    See also: https://stackoverflow.com/questions/55136414/constexpr-variable-captured-inside-lambda-loses-its-constexpr-ness
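
The linked Stack Overflow question describes the failure mode: on MSVC, a `constexpr` local that is captured by a lambda loses its constexpr-ness inside the lambda body, so it can no longer be used as a template argument there. A minimal sketch of the problem shape and the usual workaround (declaring the constant `static` so the lambda never captures it), assuming the fix follows that answer; the names here are illustrative, not the repo's:

```cuda
#include <cstdio>

template <bool kFlag>
int kernel_stub() { return kFlag ? 1 : 0; }  // stand-in for a templated kernel launch

int dispatch(bool flag) {
    if (flag) {
        // Without `static`, MSVC treats kFlagValue as captured by the inner
        // lambda and rejects its use as a template argument. Making it static
        // means it is not captured at all, which sidesteps the issue.
        constexpr static bool kFlagValue = true;
        return [&] { return kernel_stub<kFlagValue>(); }();
    } else {
        constexpr static bool kFlagValue = false;
        return [&] { return kernel_stub<kFlagValue>(); }();
    }
}

int main() { std::printf("%d\n", dispatch(true)); }
```
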
4f285b3547 | 2023-07-17 06:21:34 -07:00 | Tri Dao
    FlashAttention-2 release

2800efc71f | 2023-07-06 15:33:33 -07:00 | Tri Dao
    [FT] rotary_cos/sin should have batch_size dimension

3a9bfd076f | 2023-07-03 09:41:04 -07:00 | Tri Dao
    [FT] rotary_cos/sin should have shape (dim) instead of (seqlen, dim)

62e9814466 | 2023-07-02 16:39:39 -07:00 | Tri Dao
    [Rotary] Make sure frequency calculation is in fp32
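
A sketch of what "frequency calculation in fp32" means in the usual rotary formulation (illustrative, not the repo's kernel): the power and trig functions run in fp32 even when activations are fp16, because half precision loses accuracy at large positions.

```cuda
#include <cuda_fp16.h>
#include <math.h>

// Compute the rotary cos/sin for position `pos` and pair index `i` in fp32,
// rounding to fp16 only at the end. inv_freq = base^(-2i / rotary_dim).
__device__ void rotary_cos_sin(int pos, int i, int rotary_dim, float base,
                               __half* cos_out, __half* sin_out) {
    float inv_freq = powf(base, -2.0f * i / rotary_dim);  // fp32 throughout
    float angle = pos * inv_freq;
    float s, c;
    sincosf(angle, &s, &c);
    *cos_out = __float2half(c);
    *sin_out = __float2half(s);
}
```
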
27f8f890df | 2023-05-30 14:17:26 -07:00 | Tri Dao
    [FusedDense] Allocate lt_workspace on input device

48bc6eacd6 | 2023-05-30 13:38:34 -07:00 | Tri Dao
    [Gen] Add rotary base as an argument to FT attention kernel

ad113948a6 | 2023-04-26 09:19:48 -07:00 | Tri Dao
    [Docs] Clearer error message for bwd d > 64, bump to v1.0.4

311d6606bf | 2023-04-20 17:03:13 -07:00 | Tri Dao
    [Gen] Fix FT kernel smem size, CG when batch size changed

45567a25a2 | 2023-04-15 06:09:41 +00:00 | Kirthi Shankar Sivamani
    Only 1 thread writes to global mem in fprop
    Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
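
A hedged illustration of the single-writer idiom this commit describes: when every thread holds the same value (the seed scenario below is hypothetical, chosen only to show the pattern), one thread writing it to global memory is enough.

```cuda
__global__ void write_seed(unsigned long long seed, unsigned long long* out) {
    // All threads hold the same `seed`, so a single writer suffices; this
    // avoids redundant global-memory traffic from every thread storing the
    // same word to the same address.
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        *out = seed;
    }
}
```
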
7d25a4ec4f | 2023-04-13 06:25:52 +00:00 | Kirthi Shankar Sivamani
    Handle FlashAttnQKVPackedSplitFunc by making rng_state optional in backward
    Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>

315fd31f0c | 2023-04-12 22:42:24 -07:00 | Kirthi Shankar Sivamani
    Merge branch 'HazyResearch:main' into enable_cuda_graph_capture

31018c5fa0 | 2023-04-12 16:53:22 -07:00 | Kirthi Shankar Sivamani
    Support CUDA graph capture
    Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
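
Supporting graph capture means the attention kernels can be recorded and replayed without capture-unsafe host-side work (for example, reading RNG state on the host mid-capture). A generic runtime-API sketch of stream capture, not the repo's test code:

```cuda
#include <cuda_runtime.h>

// Record the work issued on `stream` into a graph once, then replay it with
// a single launch, skipping per-kernel launch overhead.
void capture_and_replay(cudaStream_t stream) {
    cudaGraph_t graph;
    cudaGraphExec_t graph_exec;

    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    // ... enqueue the attention forward/backward kernels on `stream` here ...
    cudaStreamEndCapture(stream, &graph);

    cudaGraphInstantiate(&graph_exec, graph, nullptr, nullptr, 0);
    cudaGraphLaunch(graph_exec, stream);
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(graph_exec);
    cudaGraphDestroy(graph);
}
```
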
dec4f2e910 | 2023-04-06 23:40:15 -07:00 | Tri Dao
    [FusedDense] Set workspace size to 32M for Hopper and 4M for others
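
A sketch of the size heuristic the message describes; the function name is illustrative, not the repo's:

```cuda
#include <cuda_runtime.h>

// Pick the cuBLASLt workspace size by architecture: 32 MiB on Hopper
// (compute capability 9.x and up), 4 MiB on everything else.
size_t lt_workspace_bytes(int device) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, device);
    return (prop.major >= 9) ? (32ull << 20) : (4ull << 20);
}
```
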
393882bc08 | 2023-03-31 14:23:45 -07:00 | Tri Dao
    [LayerNorm] Implement LN with parallel residual, support dim 8k

f5d0fbd468 | 2023-03-28 21:27:00 -07:00 | Tri Dao
    [FT] Fix FT's single query attention for bf16 hdim128 rotary

dc08ea1c33 | 2023-03-15 16:59:27 -07:00 | Tri Dao
    Support H100 for other CUDA extensions

1b18f1b7a1 | 2023-03-15 14:59:02 -07:00 | Tri Dao
    Support H100

e45a46a5b7 | 2023-03-14 14:35:53 -07:00 | Tri Dao
    [Rotary] Implement GPT-J style (interleaved) rotary
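
GPT-J style rotary rotates adjacent element pairs (x0, x1), (x2, x3), ..., whereas the GPT-NeoX style rotates elements half the rotary block apart. A scalar sketch of the two layouts (illustrative, not the repo's kernel):

```cuda
// Apply rotary to one head vector x[0..rotary_dim) with per-pair cos/sin
// already computed. `interleaved` selects the GPT-J pairing.
void apply_rotary(float* x, const float* cos_v, const float* sin_v,
                  int rotary_dim, bool interleaved) {
    int half = rotary_dim / 2;
    for (int i = 0; i < half; ++i) {
        // GPT-J (interleaved): rotate adjacent elements (x[2i], x[2i+1]).
        // GPT-NeoX: rotate elements half a block apart (x[i], x[i+half]).
        int i0 = interleaved ? 2 * i : i;
        int i1 = interleaved ? 2 * i + 1 : i + half;
        float c = cos_v[i], s = sin_v[i];
        float a = x[i0], b = x[i1];
        x[i0] = a * c - b * s;   // standard 2D rotation by the pair's angle
        x[i1] = a * s + b * c;
    }
}
```
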
6b4a48218e | 2023-01-25 15:32:40 -08:00 | Tri Dao
    [FA] Remove unused variable rng_engine_inputs

eb33e587e9 | 2023-01-19 13:07:27 -08:00 | Tri Dao
    [LayerNorm] Rename x1 -> residual

88173a1aaf | 2023-01-17 18:12:27 -08:00 | Tri Dao
    [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP

f1e01c27ba | 2023-01-15 15:20:01 -08:00 | Tri Dao
    [Gen] Pass qkv_stride to ft_attention kernel for batched generation

7c2191542a | 2023-01-15 11:34:27 -08:00 | Tri Dao
    [Gen] Make generation work with Tensor Parallel

6738d9477d | 2023-01-06 17:34:22 -08:00 | Tri Dao
    [LayerNorm] Implement RMS Norm
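
RMS norm drops LayerNorm's mean subtraction and normalizes by the root-mean-square alone: y_i = x_i / sqrt(mean(x^2) + eps) * w_i. A scalar reference sketch, not the fused kernel:

```cuda
#include <math.h>

// RMS norm for one row of width n; unlike LayerNorm there is no mean
// subtraction and (commonly) no bias term.
void rms_norm(const float* x, const float* w, float* y, int n, float eps) {
    float sum_sq = 0.f;
    for (int i = 0; i < n; ++i) sum_sq += x[i] * x[i];
    float inv_rms = 1.f / sqrtf(sum_sq / n + eps);
    for (int i = 0; i < n; ++i) y[i] = x[i] * inv_rms * w[i];
}
```
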
a1f49a2b92 | 2023-01-06 14:40:58 -08:00 | Tri Dao
    [Compilation] Change BOOL_SWITCH to fix Windows compilation
    Follow xFormers' DISPATCH_BOOL. Haven't tested it on Windows.
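
BOOL_SWITCH lifts a runtime boolean into a compile-time constant so one call site can dispatch to template-specialized kernels. A sketch of the common shape of this pattern (the repo's exact macro may differ); note the `static`, presumably the same capture-avoidance trick as the MSVC fix near the top of this log:

```cuda
// Turn the runtime bool COND into the constexpr CONST_NAME, then invoke the
// trailing lambda, whose body can use CONST_NAME as a template argument.
#define BOOL_SWITCH(COND, CONST_NAME, ...)            \
    [&] {                                             \
        if (COND) {                                   \
            constexpr static bool CONST_NAME = true;  \
            return __VA_ARGS__();                     \
        } else {                                      \
            constexpr static bool CONST_NAME = false; \
            return __VA_ARGS__();                     \
        }                                             \
    }()

// Usage sketch (launcher name is illustrative):
// BOOL_SWITCH(params.is_causal, IsCausal, [&] {
//     run_fmha_fwd<IsCausal>(params, stream);
// });
```
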
be1afaa276 | 2023-01-03 22:09:22 -08:00 | Tri Dao
    [Gen, FT] Use fp32 accum for FMA
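
A sketch of fp32 accumulation for half-precision inputs, in the spirit of the decoding kernel's q.k dot products (illustrative, not the kernel's actual code): the inputs stay fp16, but every fused multiply-add accumulates in fp32 so long head dimensions do not lose low-order bits.

```cuda
#include <cuda_fp16.h>

__device__ float qk_dot(const __half* q, const __half* k, int head_dim) {
    float acc = 0.f;  // fp32 accumulator
    for (int i = 0; i < head_dim; ++i) {
        acc = fmaf(__half2float(q[i]), __half2float(k[i]), acc);
    }
    return acc;
}
```
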
f266fc7262 | 2023-01-03 17:46:55 -08:00 | Tri Dao
    [Gen, FT] Use tlength instead of params.timestep for rotary

a01d1213d7 | 2023-01-03 17:37:43 -08:00 | Tri Dao
    [Gen] Add kernel from FasterTransformer for benchmarking

a8cfe51551 | 2022-12-25 14:08:21 -08:00 | Tri Dao
    Implement Tensor Parallel for transformer Block

1e712ea8b0 | 2022-12-25 11:39:55 -08:00 | Tri Dao
    Implement TensorParallel for MHA

226a1b721d | 2022-12-24 11:48:56 -08:00 | Tri Dao
    Implement TensorParallel for FusedDense and FusedDenseGeluDense

dff68c2b22 | 2022-12-23 14:51:08 -08:00 | Tri Dao
    Add smoothing for CrossEntropyParallel, rename to CrossEntropyLoss
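
With label smoothing eps, the target distribution mixes (1 - eps) of the one-hot label with eps of a uniform distribution, so the loss becomes (1 - eps) * (-log p_target) + eps * mean over classes of (-log p_c). A scalar sketch given per-class log-probabilities (conventions for where eps is normalized vary; this follows the common one):

```cuda
float smoothed_ce(const float* logp, int n_classes, int target, float eps) {
    // Mean of -log p over all classes, for the uniform part of the target.
    float sum_neg_logp = 0.f;
    for (int c = 0; c < n_classes; ++c) sum_neg_logp -= logp[c];
    return (1.f - eps) * (-logp[target]) + eps * (sum_neg_logp / n_classes);
}
```
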
e68ebbe89a | 2022-12-22 21:25:31 -08:00 | Tri Dao
    Simplify FusedDense

5db330519a | 2022-12-12 22:16:14 -08:00 | Tri Dao
    [LayerNorm] Support taking subset of input or subset of output

ae137ed17a | 2022-12-10 23:28:23 -08:00 | Tri Dao
    [LayerNorm] Fuse LayerScale

8c6609ae1a | 2022-12-09 02:06:22 -08:00 | Tri Dao
    [LayerNorm] Support all dimensions up to 6k (if divisible by 8)

8a2ece89f7 | 2022-12-06 14:38:32 -08:00 | Tri Dao
    Simplify BOOL_SWITCH macro to fix compile error on gcc 7

0bf5e50038 | 2022-11-28 17:34:40 -08:00 | Tri Dao
    Release training code

9bc63d1e2d | 2022-11-25 16:35:08 -08:00 | Tri Dao
    Fix typo in comments

d95ee1a95d | 2022-11-25 16:30:18 -08:00 | Tri Dao
    Speed up compilation by splitting into separate .cu files

39ed597b28 | 2022-11-17 11:45:11 -08:00 | Tri Dao
    [LayerNorm] Compile for both sm70 and sm80

43ab0b5205 | 2022-11-15 07:10:25 -08:00 | Tri Dao
    Mention that some CUDA extensions have only been tested on A100s

e4d3013e15 | 2022-11-15 07:05:13 -08:00 | Tri Dao
    [LayerNorm] Check cuda error after querying ctas_per_sm
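
The pitfall this commit addresses: an error from the occupancy query, if left unconsumed, surfaces at some unrelated later CUDA call. A sketch of the check (illustrative wrapper, not the repo's code):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Query how many thread blocks of `kernel` fit on one SM, and consume the
// error immediately instead of letting it leak into a later CUDA call.
int query_ctas_per_sm(const void* kernel, int block_size, size_t smem_bytes) {
    int ctas_per_sm = 0;
    cudaError_t status = cudaOccupancyMaxActiveBlocksPerMultiprocessor(
        &ctas_per_sm, kernel, block_size, smem_bytes);
    if (status != cudaSuccess) {
        std::fprintf(stderr, "occupancy query failed: %s\n",
                     cudaGetErrorString(status));
        return -1;
    }
    return ctas_per_sm;
}
```
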
2e33fc8e36 | 2022-11-13 22:30:23 -08:00 | Tri Dao
    Add GPT and ViT models

fa6d1ce44f | 2022-11-13 21:59:20 -08:00 | Tri Dao
    Add fused_dense and dropout_add_layernorm CUDA extensions

7c9953815a | 2022-11-12 21:58:41 -08:00 | Tri Dao
    Add fused cross entropy loss

6998e0ecdb | 2022-11-09 09:34:14 -08:00 | Tri Dao
    Fix out-of-bound memory read

557781933d | 2022-11-05 16:26:17 -07:00 | Tri Dao
    Parallelize CUDA bwd along seqlen_k instead of seqlen_q
    This is faster since we only need to do atomic adds on dq, instead of
    atomic adds on both dk and dv.
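
The commit message gives the reasoning; a toy kernel makes the atomics trade-off concrete. This is a deliberately simplified stand-in, not the attention backward: each thread owns one K row, accumulates its dk privately, and only the shared dq needs atomicAdd. Parallelizing over Q instead would force atomics on both dk and dv, since every Q block touches every K/V row.

```cuda
// Toy gradient of s(q,k) = q[qq] * k[kk] with upstream grad g = dout[qq].
__global__ void toy_bwd(const float* dout, const float* q, const float* k,
                        float* dq, float* dk, int seqlen_q, int seqlen_k) {
    int kk = blockIdx.x * blockDim.x + threadIdx.x;  // one K row per thread
    if (kk >= seqlen_k) return;
    float dk_acc = 0.f;                              // private: no atomics
    for (int qq = 0; qq < seqlen_q; ++qq) {
        float g = dout[qq];
        dk_acc += g * q[qq];                         // only this thread writes dk[kk]
        atomicAdd(&dq[qq], g * k[kk]);               // dq is shared by all K rows
    }
    dk[kk] = dk_acc;                                 // single non-atomic store
}
```
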
ca81f32e04 | 2022-11-04 22:42:01 -07:00 | Tri Dao
    Implement rotary embedding in CUDA