Ikko Eltociear Ashimine | 419ea45b64 | fix typo in default.yaml (additionaly -> additionally) | 2023-01-21 00:47:12 +09:00
Tri Dao | 33e0860c9c | Bump to v0.2.8 | 2023-01-19 13:17:19 -08:00
Tri Dao | eb33e587e9 | [LayerNorm] Rename x1 -> residual | 2023-01-19 13:07:27 -08:00
Tri Dao | f68d41ec77 | [Gen] Add OPT to generation test | 2023-01-17 19:59:06 -08:00
Tri Dao | 88173a1aaf | [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP | 2023-01-17 18:12:27 -08:00
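For reference, a minimal unfused sketch of the MLP shape that 88173a1aaf generalizes: the fused module computes Linear -> GELU/ReLU -> Linear in fewer kernel launches, while the module and argument names below are illustrative, not flash_attn's actual API.

```python
import torch.nn as nn
import torch.nn.functional as F

class ReferenceMLP(nn.Module):
    """Unfused Linear -> activation -> Linear, the math that FusedMLP fuses.
    The `activation` switch between gelu and relu mirrors the new option."""
    def __init__(self, dim, hidden_dim, activation="gelu"):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, dim)
        self.act = F.gelu if activation == "gelu" else F.relu

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))
```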
|
Tri Dao | 780e8eeabb | [ViT] Support timm checkpoint, add tests | 2023-01-16 01:20:34 -08:00
Tri Dao | 2ec7d3f72c | Merge pull request #105 from jamaliki/patch-1: Change default dropout value in documentation | 2023-01-15 23:01:20 -08:00
Tri Dao | ef085cfcda | [ViT] Fix extra norm_0, use new LN order in Block | 2023-01-15 22:58:56 -08:00
Tri Dao | ff34123bd4 | Reorder LN in Block, support OPT | 2023-01-15 22:14:31 -08:00
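For context on the LayerNorm reordering in ff34123bd4 and ef085cfcda: a pre-norm residual block applies the norm before each sublayer rather than after. The sketch below shows that generic ordering only; flash_attn's actual Block additionally fuses dropout, the residual add, and the norm into one kernel.

```python
import torch.nn as nn

class PreNormBlock(nn.Module):
    """Pre-norm ordering: norm -> sublayer -> residual add, for both the
    attention mixer and the MLP. A generic sketch, not flash_attn's Block."""
    def __init__(self, dim, mixer, mlp):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mixer = mixer  # e.g. multi-head attention
        self.mlp = mlp

    def forward(self, x):
        x = x + self.mixer(self.norm1(x))  # attention sublayer
        x = x + self.mlp(self.norm2(x))    # MLP sublayer
        return x
```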
|
Tri Dao | f1e01c27ba | [Gen] Pass qkv_stride to ft_attention kernel for batched generation | 2023-01-15 15:20:01 -08:00
Tri Dao | 7c2191542a | [Gen] Make generation work with Tensor Parallel | 2023-01-15 11:34:27 -08:00
Kiarash Jamali | 41cb909741 | Change default dropout value in documentation (documentation says default is 0.1, but the code has attention_dropout default at 0.0) | 2023-01-13 10:50:07 +00:00
Tri Dao | d509832426 | [Compilation] Add _NO_HALF2 flags to be consistent with Pytorch (eb7b89771e/cmake/Dependencies.cmake (L1693)) | 2023-01-12 22:15:41 -08:00
Tri Dao | f95c2fc108 | [Gen] Remove commented code | 2023-01-07 19:06:39 -08:00
Tri Dao | b48599002a | [Gen] Add timing option | 2023-01-07 19:05:09 -08:00
Tri Dao | 0938298e4c | [Gen] Adjust shape of kv_cache when using FT | 2023-01-07 17:27:54 -08:00
Tri Dao | e02fd588aa | [Gen] Implement top-k and top-p sampling | 2023-01-07 17:00:02 -08:00
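Top-k and top-p (nucleus) sampling, as named in e02fd588aa, are standard decoding filters: keep only the k most likely tokens, then only the smallest set whose cumulative probability exceeds p, and sample from what remains. A generic sketch of the technique, not the library's exact code:

```python
import torch

def sample(logits, top_k=0, top_p=1.0, temperature=1.0):
    """Sample one token id per row of `logits` (batch, vocab), with
    optional top-k then top-p filtering."""
    logits = logits / temperature
    if top_k > 0:
        top_k = min(top_k, logits.size(-1))
        # Mask everything below the k-th largest logit.
        kth = torch.topk(logits, top_k, dim=-1).values[..., -1, None]
        logits = logits.masked_fill(logits < kth, float("-inf"))
    if top_p < 1.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True, dim=-1)
        cum_probs = sorted_logits.softmax(dim=-1).cumsum(dim=-1)
        # Drop tokens once cumulative probability exceeds top_p,
        # shifting by one so the most likely token is always kept.
        sorted_remove = cum_probs > top_p
        sorted_remove[..., 1:] = sorted_remove[..., :-1].clone()
        sorted_remove[..., 0] = False
        remove = sorted_remove.scatter(-1, sorted_idx, sorted_remove)
        logits = logits.masked_fill(remove, float("-inf"))
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)
```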
|
Tri Dao | 11be742aa3 | [Gen] Test generation with rotary embedding | 2023-01-07 14:37:54 -08:00
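The rotary embedding exercised by 11be742aa3 rotates pairs of query/key channels by a position-dependent angle. A reference sketch in one common (interleaved-pair) convention; the library supports multiple layouts and applies this via a fused kernel:

```python
import torch

def apply_rotary(x, base=10000.0):
    """Apply rotary position embedding to x: (batch, seqlen, nheads, headdim).
    Channels are treated as interleaved pairs (x0, x1), (x2, x3), ..."""
    _, seqlen, _, headdim = x.shape
    inv_freq = 1.0 / (base ** (torch.arange(0, headdim, 2, dtype=torch.float32) / headdim))
    t = torch.arange(seqlen, dtype=torch.float32)
    freqs = torch.outer(t, inv_freq)                   # (seqlen, headdim/2)
    cos = freqs.cos()[None, :, None, :]
    sin = freqs.sin()[None, :, None, :]
    x1, x2 = x[..., ::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., ::2] = x1 * cos - x2 * sin                # 2D rotation per pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```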
|
Tri Dao | 8d9674ed08 | Merge pull request #102 from Lamikins/main: fixed cross attention typeerror | 2023-01-07 13:56:20 -08:00
Tri Dao | 93383bd55b | [TP] Implement TensorParallel without sequence parallel | 2023-01-07 13:45:22 -08:00
Darius Lam | aec35fd67c | fixed cross attention typeerror | 2023-01-07 12:58:41 -08:00
Tri Dao | ce26d3d73d | Bump to v0.2.7 | 2023-01-06 17:37:30 -08:00
Tri Dao | 6738d9477d | [LayerNorm] Implement RMS Norm | 2023-01-06 17:34:22 -08:00
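RMS Norm, added in 6738d9477d, differs from LayerNorm in that it only rescales by the root-mean-square of the activations, without mean subtraction or a bias term. The reference math in plain PyTorch (the commit implements it as a fused CUDA kernel in dropout_layer_norm):

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """y = x / sqrt(mean(x^2) + eps) * weight. Reference math only."""
    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        # rsqrt of the per-token mean square over the feature dimension
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight
```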
|
Tri Dao | a1f49a2b92 | [Compilation] Change BOOL_SWITCH to fix Windows compilation (follow xFormers's DISTPATCH_BOOL; haven't tested it on Windows) | 2023-01-06 14:40:58 -08:00
Tri Dao | a668890fcd | [Gen] Add option to run generation with FT attention kernel | 2023-01-03 22:10:31 -08:00
Tri Dao | be1afaa276 | [Gen, FT] Use fp32 accum for FMA | 2023-01-03 22:09:22 -08:00
Tri Dao | f266fc7262 | [Gen, FT] Use tlength instead of params.timestep for rotary | 2023-01-03 17:46:55 -08:00
Tri Dao | a01d1213d7 | [Gen] Add kernel from FasterTransformer for benchmarking | 2023-01-03 17:37:43 -08:00
Tri Dao | 4cab4de5ea | [TP] Put parallel embeddings in separate modules | 2023-01-02 08:47:48 -08:00
Tri Dao | 1ec09ebd90 | [FusedDense] Limit matrix dims to 2M (instead of 64k) | 2023-01-01 17:06:39 -08:00
Tri Dao | 714c1b4f0f | [Bert] Fix embedding layer norm before embedding dropout | 2023-01-01 10:38:05 -08:00
Tri Dao | ef1ba918c6 | [GPT] Refactor function to shard state_dict for TensorParallel | 2023-01-01 00:09:33 -08:00
Tri Dao | 65b4064b2a | [FusedDense] Kick off input all_gather before weight dtype conversion | 2022-12-31 22:47:34 -08:00
Tri Dao | 71befc19e1 | [Loss] Use flash_attn.losses.cross_entropy.CrossEntropyLoss | 2022-12-31 22:43:28 -08:00
Tri Dao | cadfa396b8 | [Docker] Set torchmetrics==0.10.3 | 2022-12-30 02:42:28 -08:00
Tri Dao | 43798966cf | [Docs] Fix formatting | 2022-12-30 00:01:55 -08:00
Tri Dao | 3c7cbfc195 | [Docs] Mention that dropout_layer_norm supports all dims up to 6k | 2022-12-29 23:55:33 -08:00
Tri Dao | 85b8e3d334 | [Docs] Mention that XPos's scale_base is recommended to be 512 | 2022-12-29 20:25:02 -08:00
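For the scale_base recommendation in 85b8e3d334: XPos (Sun et al., 2022) multiplies rotary queries by a per-frequency scale raised to the power t/scale_base, and keys by its inverse, so attention logits decay smoothly with relative distance. A sketch with constants as in the XPos paper (the centering of positions used in practice is omitted here):

```python
import torch

def xpos_scale(headdim, seqlen, scale_base=512):
    """Per-position, per-frequency scale applied on top of rotary.
    Queries use scale**(t/scale_base); keys use the reciprocal."""
    zeta = (torch.arange(0, headdim, 2, dtype=torch.float32)
            + 0.4 * headdim) / (1.4 * headdim)          # (headdim/2,)
    t = torch.arange(seqlen, dtype=torch.float32)[:, None]
    return zeta ** (t / scale_base)                     # (seqlen, headdim/2)
```

A larger scale_base weakens the decay; 512 is the paper's recommended trade-off between stability and extrapolation.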
|
Tri Dao | 984d5204e2 | Update training Dockerfile to use flash-attn==0.2.6 | 2022-12-29 15:12:33 -08:00
Tri Dao | 029617179f | Merge pull request #95 from Quentin-Anthony/patch-1: Add gpt-neox adoption | 2022-12-28 15:36:36 -08:00
Quentin Anthony | d2a69a55e2 | Add gpt-neox adoption | 2022-12-28 18:33:58 -05:00
Tri Dao | a6ec1782dc | Bump to v0.2.6 | 2022-12-27 22:05:20 -08:00
Tri Dao | 63670fd84a | Implement generation for GPT | 2022-12-27 21:01:50 -08:00
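The decode loop behind 63670fd84a, reduced to its simplest form: feed the prompt, take the argmax (or a sample, per the sampling commits above) of the last position's logits, append, and repeat. A minimal sketch for any callable mapping token ids to logits; the real implementation caches key/value states instead of re-running the full prefix each step:

```python
import torch

@torch.no_grad()
def greedy_generate(model, input_ids, max_new_tokens):
    """Greedy decoding for a causal LM. `model` maps (batch, seqlen)
    token ids to (batch, seqlen, vocab) logits."""
    for _ in range(max_new_tokens):
        logits = model(input_ids)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_id], dim=1)
    return input_ids
```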
|
Tri Dao | 9d797d8848 | Support loading GPT2 weights from Huggingface | 2022-12-27 11:22:48 -08:00
Tri Dao | c6ecd40a59 | Tweak CrossEntropyLoss to take process_group in init | 2022-12-27 10:47:43 -08:00
Tri Dao | b4018a5028 | Implement Tensor Parallel for GPT model | 2022-12-26 16:22:43 -08:00
Tri Dao | 78225c5366 | Implement Tensor Parallel for GPT2Embeddings | 2022-12-25 14:29:53 -08:00
Tri Dao | a8cfe51551 | Implement Tensor Parallel for transformer Block | 2022-12-25 14:08:21 -08:00
Tri Dao | 1e712ea8b0 | Implement TensorParallel for MHA | 2022-12-25 11:39:55 -08:00
Tri Dao | 226a1b721d | Implement TensorParallel for FusedDense and FusedDenseGeluDense | 2022-12-24 11:48:56 -08:00
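The building blocks behind the tensor-parallel commits (226a1b721d through b4018a5028) are Megatron-style column- and row-parallel linear layers. A sketch of the idea under simplified assumptions (no sequence parallelism, input replicated on every rank); class names are illustrative, not flash_attn's fused implementation:

```python
import torch
import torch.nn as nn
import torch.distributed as dist

class ColumnParallelLinear(nn.Module):
    """Each rank holds a slice of the output features; the matmul is
    purely local and the output stays sharded along the last dim."""
    def __init__(self, in_features, out_features, process_group):
        super().__init__()
        world_size = dist.get_world_size(process_group)
        assert out_features % world_size == 0
        self.linear = nn.Linear(in_features, out_features // world_size)

    def forward(self, x):
        return self.linear(x)  # (..., out_features / world_size)

class RowParallelLinear(nn.Module):
    """Each rank holds a slice of the input features; partial products
    are summed across ranks with a single all_reduce."""
    def __init__(self, in_features, out_features, process_group):
        super().__init__()
        world_size = dist.get_world_size(process_group)
        assert in_features % world_size == 0
        self.process_group = process_group
        # Bias is added once, after the reduction, so it is not
        # double-counted across ranks.
        self.linear = nn.Linear(in_features // world_size, out_features, bias=False)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x_shard):
        out = self.linear(x_shard)                      # partial sums
        dist.all_reduce(out, group=self.process_group)  # sum over ranks
        return out + self.bias
```

Composing a column-parallel fc1 with a row-parallel fc2 keeps the hidden activations sharded end to end, so a tensor-parallel MLP needs only one all_reduce in the forward pass.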