Tri Dao | e02fd588aa | [Gen] Implement top-k and top-p sampling | 2023-01-07 17:00:02 -08:00
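
[Note on e02fd588aa] Top-k sampling keeps only the k highest-probability tokens; top-p (nucleus) sampling keeps the smallest set of tokens whose cumulative probability exceeds p. A minimal PyTorch sketch of the standard recipe, not the repository's actual code (the function name is hypothetical):

    import torch

    def sample_top_k_top_p(logits, top_k=0, top_p=0.0, temperature=1.0):
        # logits: (batch, vocab_size). top_k=0 / top_p=0.0 disable the respective filter.
        logits = logits / temperature
        if top_k > 0:
            # Mask out everything below the k-th largest logit.
            kth = torch.topk(logits, top_k, dim=-1).values[..., -1:]
            logits = logits.masked_fill(logits < kth, float("-inf"))
        if top_p > 0.0:
            sorted_logits, sorted_idx = torch.sort(logits, dim=-1, descending=True)
            cum_probs = sorted_logits.softmax(dim=-1).cumsum(dim=-1)
            mask = cum_probs > top_p
            mask[..., 1:] = mask[..., :-1].clone()  # keep the token that crosses p
            mask[..., 0] = False
            sorted_logits = sorted_logits.masked_fill(mask, float("-inf"))
            # Scatter the filtered logits back to their original vocabulary positions.
            logits = torch.full_like(logits, float("-inf")).scatter(-1, sorted_idx, sorted_logits)
        return torch.multinomial(logits.softmax(dim=-1), num_samples=1)  # (batch, 1)
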
Tri Dao | 11be742aa3 | [Gen] Test generation with rotary embedding | 2023-01-07 14:37:54 -08:00
Tri Dao | 8d9674ed08 | Merge pull request #102 from Lamikins/main: fixed cross attention typeerror | 2023-01-07 13:56:20 -08:00
Tri Dao | 93383bd55b | [TP] Implement TensorParallel without sequence parallel | 2023-01-07 13:45:22 -08:00
Darius Lam | aec35fd67c | fixed cross attention typeerror | 2023-01-07 12:58:41 -08:00
Tri Dao | ce26d3d73d | Bump to v0.2.7 | 2023-01-06 17:37:30 -08:00
Tri Dao | 6738d9477d | [LayerNorm] Implement RMS Norm | 2023-01-06 17:34:22 -08:00
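
[Note on 6738d9477d] RMS Norm (Zhang & Sennrich, 2019) drops LayerNorm's mean-centering and bias, normalizing by the root-mean-square alone. A plain-PyTorch reference sketch of the formula; the commit itself adds a fused CUDA kernel, which this is not:

    import torch
    from torch import nn

    class RMSNorm(nn.Module):
        # y = x / sqrt(mean(x^2) + eps) * weight  (no mean subtraction, no bias)
        def __init__(self, dim, eps=1e-6):
            super().__init__()
            self.eps = eps
            self.weight = nn.Parameter(torch.ones(dim))

        def forward(self, x):
            # Normalize in fp32 for numerical stability, then cast back.
            rms = x.float().pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
            return (x.float() * rms).type_as(x) * self.weight
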
Tri Dao | a1f49a2b92 | [Compilation] Change BOOL_SWITCH to fix Windows compilation: follow xFormers's DISPATCH_BOOL. Haven't tested it on Windows. | 2023-01-06 14:40:58 -08:00
Tri Dao | a668890fcd | [Gen] Add option to run generation with FT attention kernel | 2023-01-03 22:10:31 -08:00
Tri Dao | be1afaa276 | [Gen, FT] Use fp32 accum for FMA | 2023-01-03 22:09:22 -08:00
Tri Dao | f266fc7262 | [Gen, FT] Use tlength instead of params.timestep for rotary | 2023-01-03 17:46:55 -08:00
Tri Dao | a01d1213d7 | [Gen] Add kernel from FasterTransformer for benchmarking | 2023-01-03 17:37:43 -08:00
Tri Dao | 4cab4de5ea | [TP] Put parallel embeddings in separate modules | 2023-01-02 08:47:48 -08:00
Tri Dao | 1ec09ebd90 | [FusedDense] Limit matrix dims to 2M (instead of 64k) | 2023-01-01 17:06:39 -08:00
Tri Dao | 714c1b4f0f | [Bert] Fix embedding layer norm before embedding dropout | 2023-01-01 10:38:05 -08:00
Tri Dao | ef1ba918c6 | [GPT] Refactor function to shard state_dict for TensorParallel | 2023-01-01 00:09:33 -08:00
Tri Dao | 65b4064b2a | [FusedDense] Kick off input all_gather before weight dtype conversion | 2022-12-31 22:47:34 -08:00
Tri Dao | 71befc19e1 | [Loss] Use flash_attn.losses.cross_entropy.CrossEntropyLoss | 2022-12-31 22:43:28 -08:00
Tri Dao | cadfa396b8 | [Docker] Set torchmetrics==0.10.3 | 2022-12-30 02:42:28 -08:00
Tri Dao | 43798966cf | [Docs] Fix formatting | 2022-12-30 00:01:55 -08:00
Tri Dao | 3c7cbfc195 | [Docs] Mention that dropout_layer_norm supports all dims up to 6k | 2022-12-29 23:55:33 -08:00
Tri Dao | 85b8e3d334 | [Docs] Mention that XPos's scale_base is recommended to be 512 | 2022-12-29 20:25:02 -08:00
Tri Dao | 984d5204e2 | Update training Dockerfile to use flash-attn==0.2.6 | 2022-12-29 15:12:33 -08:00
Tri Dao | 029617179f | Merge pull request #95 from Quentin-Anthony/patch-1: Add gpt-neox adoption | 2022-12-28 15:36:36 -08:00
Quentin Anthony | d2a69a55e2 | Add gpt-neox adoption | 2022-12-28 18:33:58 -05:00
Tri Dao | a6ec1782dc | Bump to v0.2.6 | 2022-12-27 22:05:20 -08:00
Tri Dao | 63670fd84a | Implement generation for GPT | 2022-12-27 21:01:50 -08:00
Tri Dao | 9d797d8848 | Support loading GPT2 weights from Huggingface | 2022-12-27 11:22:48 -08:00
Tri Dao | c6ecd40a59 | Tweak CrossEntropyLoss to take process_group in init | 2022-12-27 10:47:43 -08:00
Caleb Thomas | c9a649805b | Add a simple tutorial to README.md | 2022-12-27 14:13:59 +08:00
Tri Dao | b4018a5028 | Implement Tensor Parallel for GPT model | 2022-12-26 16:22:43 -08:00
Tri Dao | 78225c5366 | Implement Tensor Parallel for GPT2Embeddings | 2022-12-25 14:29:53 -08:00
Tri Dao | a8cfe51551 | Implement Tensor Parallel for transformer Block | 2022-12-25 14:08:21 -08:00
Tri Dao | 1e712ea8b0 | Implement TensorParallel for MHA | 2022-12-25 11:39:55 -08:00
Tri Dao | 226a1b721d | Implement TensorParallel for FusedDense and FusedDenseGeluDense | 2022-12-24 11:48:56 -08:00
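
[Note on the Tensor Parallel series above] In Megatron-style tensor parallelism, an MLP's first linear layer is split column-wise (each rank owns a slice of the output features) and the second row-wise (each rank produces a partial sum that an all-reduce combines), so the activation between them needs no communication. A rough torch.distributed sketch under the assumption that a process group is already initialized; this is an illustration, not the FusedDense implementation:

    import torch.distributed as dist
    from torch import nn

    class ColumnParallelLinear(nn.Module):
        # Shards output features: each rank computes out_features // world_size columns.
        def __init__(self, in_features, out_features, process_group):
            super().__init__()
            world_size = dist.get_world_size(process_group)
            self.linear = nn.Linear(in_features, out_features // world_size)

        def forward(self, x):
            return self.linear(x)  # result stays sharded along the feature dim

    class RowParallelLinear(nn.Module):
        # Shards input features: each rank produces a partial sum of the output.
        def __init__(self, in_features, out_features, process_group):
            super().__init__()
            self.process_group = process_group
            world_size = dist.get_world_size(process_group)
            # bias=False: a bias must be added once, after the all-reduce (elided here).
            self.linear = nn.Linear(in_features // world_size, out_features, bias=False)

        def forward(self, x):
            out = self.linear(x)
            dist.all_reduce(out, group=self.process_group)  # combine partial sums
            return out
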
Tri Dao | dff68c2b22 | Add smoothing for CrossEntropyParallel, rename to CrossEntropyLoss | 2022-12-23 14:51:08 -08:00
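
[Note on dff68c2b22] Label smoothing mixes the one-hot target with the uniform distribution: with smoothing s over vocabulary size V, the loss becomes (1 - s) * NLL + (s/V) * sum_i(-log p_i). A sketch of one common unfused convention (the repo's CrossEntropyLoss is a fused kernel that also supports tensor parallelism; conventions differ on whether the target class is excluded from the smoothed mass):

    import torch
    import torch.nn.functional as F

    def smoothed_cross_entropy(logits, target, smoothing=0.1):
        # logits: (batch, vocab), target: (batch,) int64 class indices.
        logp = F.log_softmax(logits.float(), dim=-1)
        nll = -logp.gather(-1, target[:, None]).squeeze(-1)  # -log p(target)
        uniform = -logp.mean(dim=-1)  # (1/V) * sum_i -log p(i)
        return ((1.0 - smoothing) * nll + smoothing * uniform).mean()
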
Tri Dao | e68ebbe89a | Simplify FusedDense | 2022-12-22 21:25:31 -08:00
Tri Dao | 1bc6e5b09c | Bump to v0.2.5 | 2022-12-21 14:33:18 -08:00
Tri Dao | 496e4f528c | Implement XPos (Sun et al.) | 2022-12-21 14:17:58 -08:00
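
[Note on 496e4f528c] XPos (Sun et al., "A Length-Extrapolatable Transformer") augments rotary embeddings with per-dimension exponential scaling: q at position n is multiplied by scale**(n / scale_base) and k by scale**(-n / scale_base), so the q.k dot product decays with relative distance. A sketch of the scale table following the paper's gamma = 0.4 convention; treat the exact constants as an assumption, and note the doc commit above recommending scale_base = 512:

    import torch

    def xpos_scale(head_dim, positions, scale_base=512):
        # Per-pair decay factors: (2i + 0.4*d) / (1.4*d) for channel pair i.
        scale = (torch.arange(0, head_dim, 2) + 0.4 * head_dim) / (1.4 * head_dim)
        # Multiply q by this at its position and k by its reciprocal; the dot
        # product then decays as scale**((m - n) / scale_base).
        return scale[None, :] ** (positions[:, None] / scale_base)  # (seq, head_dim // 2)
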
Tri Dao | c2407dec96 | Fix typo in config: train.gpu -> train.gpu_mem | 2022-12-21 13:42:30 -08:00
Tri Dao | 13cdceb377 | Implement last_layer_subset optimization for BERT | 2022-12-19 22:18:46 -08:00
Tri Dao | 5fb6df0e04 | Implement BERT | 2022-12-18 21:47:27 -08:00
Tri Dao | dc24c22603 | Merge pull request #92 from ploshkin/rm-shape-asserts: Fix slicing dimensions in rotary embeddings | 2022-12-17 11:22:06 -08:00
Alexander Ploshkin | ee8984d2be | add asserts for sin shape | 2022-12-17 13:34:57 +04:00
Alexander Ploshkin | c7c66976cc | fix slicing dimensions | 2022-12-16 15:39:06 +04:00
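
[Note on the rotary fixes above] Rotary embeddings (Su et al., RoFormer) rotate each channel pair by a position-dependent angle; the slicing fixed in #92 concerns cutting the cached cos/sin tables down to the current sequence length. A minimal sketch using the interleaved-pair layout (an assumption; the repo's memory layout may differ):

    import torch

    def apply_rotary(x, cos, sin):
        # x: (batch, seqlen, nheads, head_dim); cos, sin: (max_seqlen, head_dim // 2).
        seqlen = x.shape[1]
        cos = cos[:seqlen][None, :, None, :]  # slice the cache, then broadcast
        sin = sin[:seqlen][None, :, None, :]
        x1, x2 = x[..., ::2], x[..., 1::2]  # even/odd channel pairs
        # Standard 2D rotation of each pair by the angle at its position.
        return torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1).flatten(-2)
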
Tri Dao | b78f5a392d | [Docs] Mention Megatron-LM | 2022-12-15 19:49:04 -08:00
Tri Dao | ece8f05d09 | [Docs] Mention PubMedGPT | 2022-12-15 19:44:59 -08:00
Alexander Ploshkin | 96656b9323 | Remove redundant shape asserts in rotary embeddings | 2022-12-15 18:13:21 +04:00
Tri Dao | 04c4c6106e | Bump to v0.2.4 | 2022-12-14 14:49:26 -08:00
Tri Dao | 6b5f271c6d | [Triton] Avoid einops repeat by using Tensor.expand | 2022-12-14 14:48:41 -08:00
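
[Note on 6b5f271c6d] einops.repeat (like Tensor.repeat) materializes a full copy, whereas Tensor.expand returns a stride-0 view along the broadcast dimension, so no extra memory is allocated or written before the Triton kernel reads it. A quick illustration:

    import torch

    x = torch.randn(1, 128, 64)
    y_copy = x.repeat(8, 1, 1)    # allocates and writes 8x the data
    y_view = x.expand(8, -1, -1)  # no allocation: stride 0 along dim 0
    assert y_view.stride(0) == 0 and torch.equal(y_copy, y_view)
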