Tri Dao | 63670fd84a | Implement generation for GPT | 2022-12-27 21:01:50 -08:00
Tri Dao | 9d797d8848 | Support loading GPT2 weights from Huggingface | 2022-12-27 11:22:48 -08:00
Tri Dao | c6ecd40a59 | Tweak CrossEntropyLoss to take process_group in init | 2022-12-27 10:47:43 -08:00
Tri Dao | b4018a5028 | Implement Tensor Parallel for GPT model | 2022-12-26 16:22:43 -08:00
Tri Dao | 78225c5366 | Implement Tensor Parallel for GPT2Embeddings | 2022-12-25 14:29:53 -08:00
Tri Dao | a8cfe51551 | Implement Tensor Parallel for transformer Block | 2022-12-25 14:08:21 -08:00
Tri Dao | 1e712ea8b0 | Implement TensorParallel for MHA | 2022-12-25 11:39:55 -08:00
Tri Dao | 226a1b721d | Implement TensorParallel for FusedDense and FusedDenseGeluDense | 2022-12-24 11:48:56 -08:00
Tri Dao | dff68c2b22 | Add smoothing for CrossEntropyParallel, rename to CrossEntropyLoss | 2022-12-23 14:51:08 -08:00
Tri Dao | e68ebbe89a | Simplify FusedDense | 2022-12-22 21:25:31 -08:00
Tri Dao | 496e4f528c | Implement XPos (Sun et al.) | 2022-12-21 14:17:58 -08:00
Tri Dao | 13cdceb377 | Implement last_layer_subset optimization for BERT | 2022-12-19 22:18:46 -08:00
Tri Dao | 5fb6df0e04 | Implement BERT | 2022-12-18 21:47:27 -08:00
Alexander Ploshkin | ee8984d2be | add asserts for sin shape | 2022-12-17 13:34:57 +04:00
Alexander Ploshkin | c7c66976cc | fix slicing dimensions | 2022-12-16 15:39:06 +04:00
Alexander Ploshkin | 96656b9323 | Remove redundant shape asserts in rotary embeddings | 2022-12-15 18:13:21 +04:00
Tri Dao | 6b5f271c6d | [Triton] Avoid einops repeat by using Tensor.expand | 2022-12-14 14:48:41 -08:00
Tri Dao | 88c4e5dbf6 | Fix the case when dout is not contiguous | 2022-12-13 13:58:17 -08:00
Tri Dao | 5db330519a | [LayerNorm] Support taking subset of input or subset of output | 2022-12-12 22:16:14 -08:00
Tri Dao | ae137ed17a | [LayerNorm] Fuse LayerScale | 2022-12-10 23:28:23 -08:00
Tri Dao | 8c6609ae1a | [LayerNorm] Support all dimensions up to 6k (if divisible by 8) | 2022-12-09 02:06:22 -08:00
Tri Dao | 1feb94265c | [ViT] Use dropout_add_ln for the 1st layer norm | 2022-11-23 12:48:56 -08:00
Tri Dao | b8ccd20098 | [Triton] Fix variable name from qkv to kv (h/t FrankZijlstra) | 2022-11-22 02:07:32 -08:00
Tri Dao | 054816177e | Bump version to 0.2.1 | 2022-11-20 22:35:59 -08:00
Tri Dao | 0fa5c0d7ef | Add PatchEmbed | 2022-11-17 16:56:06 -08:00
Tri Dao | ece539abd6 | Add __init__.py files to subdirectories for installation | 2022-11-17 16:55:44 -08:00
Tri Dao | 71f674ae23 | [Rotary] Customize base, support seqlen_offset | 2022-11-17 11:43:36 -08:00
Tri Dao | 2e33fc8e36 | Add GPT and ViT models | 2022-11-13 22:30:23 -08:00
Tri Dao | d4b320b31f | Add MLP, MHA, Block, Embedding modules | 2022-11-13 22:06:44 -08:00
Tri Dao | fa6d1ce44f | Add fused_dense and dropout_add_layernorm CUDA extensions | 2022-11-13 21:59:20 -08:00
Tri Dao | 343492ec30 | Make nccl operations async in CrossEntropyLossParallel | 2022-11-13 17:27:26 -08:00
Tri Dao | 7c9953815a | Add fused cross entropy loss | 2022-11-12 21:58:41 -08:00
Tri Dao | 55797f32c9 | Remove RotaryEmbedding from FlashAttention module | 2022-11-10 11:54:36 -08:00
  To avoid import error if one doesn't have rotary_emb installed
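Commit 55797f32c9 guards against an import error when rotary_emb is absent. A minimal, hypothetical sketch of that optional-dependency pattern is below; the names and fallback behaviour are assumptions, not the repository's actual code.

```python
# Hypothetical sketch of the optional-import pattern behind 55797f32c9:
# the fused CUDA kernel package `rotary_emb` may or may not be installed,
# and guarding the import keeps the attention module importable either way.
try:
    import rotary_emb  # optional fused kernel
except ImportError:
    rotary_emb = None


class RotaryEmbedding:
    def __init__(self, dim: int):
        # Fail only when the feature is actually requested, not at import time.
        if rotary_emb is None:
            raise ImportError("rotary_emb must be installed to use RotaryEmbedding")
        self.dim = dim
```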
Tri Dao | 908a5b2244 | Set num_warps=4 for headdim=64 in Triton fwd (h/t Michael Benesty) | 2022-11-07 08:58:16 -08:00
Tri Dao | 7479757191 | Fix pipelining bug in Triton bwd with bias_type=matrix | 2022-11-06 11:50:35 -08:00
Tri Dao | 557781933d | Parallelize CUDA bwd along seqlen_k instead of seqlen_q | 2022-11-05 16:26:17 -07:00
  This is faster since we only need to do atomic adds on dq, instead of atomic adds on both dk and dv.
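Commit 557781933d notes that splitting the backward pass over seqlen_k leaves only dq needing atomic adds. A small NumPy illustration of that data flow is below; it is not the CUDA kernel, and the score gradient dS is simply passed in rather than recomputed block by block as the real kernel does.

```python
# NumPy illustration of why a backward pass parallelized over k-blocks only needs
# shared accumulation on dq: each k-block owns its own slice of dk and dv, but
# every k-block contributes to all rows of dq.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def bwd_over_k_blocks(q, k, v, dout, dS, block=2):
    # q, dout: (seqlen_q, d); k, v: (seqlen_k, d); dS: (seqlen_q, seqlen_k)
    P = softmax(q @ k.T)                      # attention probabilities
    dq, dk, dv = np.zeros_like(q), np.zeros_like(k), np.zeros_like(v)
    for j in range(0, k.shape[0], block):     # one iteration per k-block
        sl = slice(j, j + block)
        dv[sl] = P[:, sl].T @ dout            # local write: this block owns dv[sl]
        dk[sl] = dS[:, sl].T @ q              # local write: this block owns dk[sl]
        dq += dS[:, sl] @ k[sl]               # shared accumulation -> atomics on GPU
    return dq, dk, dv
```

Splitting over seqlen_q blocks instead would flip this: dq would be local, but both dk and dv would need cross-block accumulation, i.e. two atomic targets instead of one.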
Tri Dao | ca81f32e04 | Implement rotary embedding in CUDA | 2022-11-04 22:42:01 -07:00
Tri Dao | 62025e1aff | Fix more race condition in Triton bwd when there's bias | 2022-11-04 12:53:09 -07:00
Tri Dao | ff78ea4123 | Fix race condition in Triton bwd when there's bias | 2022-11-04 11:20:27 -07:00
Tri Dao | 86862cfd7b | Implement attention bias for Triton version | 2022-11-04 10:33:54 -07:00
Tri Dao | 470010f59b | Fix race condition for Triton bwd for headdim 48 and 96 | 2022-11-03 15:52:40 -07:00
Tri Dao | aacc10fbab | Fix race condition in Triton bwd for non-po2 headdims | 2022-11-02 07:32:54 -07:00
Tri Dao | 1fb12afdfb | Avoid memcpy in the Triton bwd | 2022-11-01 15:06:45 -07:00
Tri Dao | 731f154de3 | Fix race conditions in the Triton bwd for headdim=64 | 2022-11-01 15:05:55 -07:00
Tri Dao | 9b0bc97872 | Fix race condition in Triton fwd | 2022-10-31 14:34:57 -07:00
Tri Dao | 215930bce3 | Fix EVEN_M & EVEN_HEADDIM for headdim=40 in Triton bwd | 2022-10-31 01:41:49 -07:00
Tri Dao | 4f81aff46e | Add debug_barrier for all headdims in Triton bwd | 2022-10-31 01:25:02 -07:00
Tri Dao | bedcbd6a71 | Disable some autotune configs that give wrong results in Triton bwd | 2022-10-31 01:05:51 -07:00
Tri Dao | e78d509c64 | [WIP] Support all head dimensions up to 128 in the Triton bwd | 2022-10-31 00:46:22 -07:00
  WIP because there seems to be some race conditions for head dimensions other than 16, 32, 64, 128.
Tri Dao | 008951f1d9 | Support all head dimensions up to 128 in the Triton fwd | 2022-10-30 22:10:48 -07:00