Tri Dao | e6a8026489 | [Gen] Rename max_sequence_len->max_seqlen, sequence_len_offset->seqlen_offset | 2023-09-19 22:20:22 -07:00
Tri Dao | dfe29f5e2b | [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead | 2023-09-18 15:29:06 -07:00
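With the move off the FasterTransformer (ft_attention) kernel, decoding goes through flash_attn_with_kvcache. Below is a minimal sketch of one decode step with that API; shapes follow flash-attn's (batch, seqlen, nheads, headdim) convention, and the sizes and dtypes are illustrative assumptions, not the repo's generation code.

```python
import torch
from flash_attn import flash_attn_with_kvcache

batch, nheads, headdim, max_seqlen = 2, 16, 64, 1024  # illustrative sizes

# Preallocated KV cache; cache_seqlens tracks how many tokens each sequence holds.
k_cache = torch.zeros(batch, max_seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
v_cache = torch.zeros_like(k_cache)
cache_seqlens = torch.full((batch,), 128, dtype=torch.int32, device="cuda")

# One new token per sequence: the kernel appends k/v into the cache in place
# and attends over the cached plus new keys/values.
q = torch.randn(batch, 1, nheads, headdim, dtype=torch.float16, device="cuda")
k, v = torch.randn_like(q), torch.randn_like(q)
out = flash_attn_with_kvcache(
    q, k_cache, v_cache, k=k, v=v, cache_seqlens=cache_seqlens, causal=True
)  # (batch, 1, nheads, headdim)
```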
Tri Dao | 8a733cbd53 | [Gen] Fix calling update_graph_cache in tests | 2023-09-10 17:22:37 -07:00
Tri Dao | a86442f0f3 | [Gen] Use flash_attn_with_kvcache in generation | 2023-09-07 08:24:43 -07:00
Tri Dao | 798858f9f1 | Fix test_baichuan | 2023-09-03 21:01:37 -07:00
Tri Dao | de2949f37d | [Rotary] Pass max_seqlen from mha.py to rotary during inference | 2023-09-03 11:37:06 -07:00
dan_the_3rd | 011ec323d6 | Support MQA + MP for decoding (#490) (Co-authored-by: danthe3rd <danthe3rd>) | 2023-08-30 10:29:54 -07:00
Tri Dao | a2974e850a | Change causal for CrossAttention in mha.py to align to bottom right | 2023-08-26 12:57:33 -07:00
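Bottom-right alignment matters when seqlen_q != seqlen_k: the causal mask is anchored at the bottom-right corner, so the last query row always sees every key. A small sketch of the resulting mask (an illustrative helper, not the repo's code):

```python
import torch

def bottom_right_causal_mask(seqlen_q: int, seqlen_k: int) -> torch.Tensor:
    # Key j is visible to query i iff j - i <= seqlen_k - seqlen_q.
    i = torch.arange(seqlen_q).unsqueeze(1)
    j = torch.arange(seqlen_k).unsqueeze(0)
    return j - i <= seqlen_k - seqlen_q  # True = attend

print(bottom_right_causal_mask(2, 4).int())
# tensor([[1, 1, 1, 0],
#         [1, 1, 1, 1]])
```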
Tri Dao | f1a73d0740 | Run isort and black on python files | 2023-08-18 14:22:11 -07:00
Xuechen Li | bb4cded17b | Support num_heads not divisible by world_size; resolves #459 (#461) | 2023-08-18 14:10:35 -07:00
    Commit body: unequal rank; trim; enable passing in number of heads for each rank; simplify; simplify; cleanup; fix col parallel; fix bug with row parallel; fit out proj; refac; fix sharding logic; refac sharding; refac; support multiple of; make fn reusable; fix bug in dimensions; scaffold; test uneven heads; fix test by adding barrier; refac; reuse code; clean up.
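One common way to shard attention heads when num_heads % world_size != 0 is to give each of the first (num_heads % world_size) ranks one extra head. The sketch below illustrates that partitioning idea; it is an assumption for illustration, not necessarily the exact logic of #461.

```python
def heads_for_rank(num_heads: int, world_size: int, rank: int) -> int:
    # Base share for every rank, plus one extra head for the first `extra` ranks.
    base, extra = divmod(num_heads, world_size)
    return base + (1 if rank < extra else 0)

# 14 heads over 4 ranks: [4, 4, 3, 3], and every head is assigned exactly once.
assert [heads_for_rank(14, 4, r) for r in range(4)] == [4, 4, 3, 3]
assert sum(heads_for_rank(14, 4, r) for r in range(4)) == 14
```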
Tri Dao | bec5b3d374 | [MHA] Run black on mha.py | 2023-08-16 23:47:13 -07:00
Tri Dao | 425dbcb6c6 | [MHA] Implement MQA/GQA | 2023-07-23 00:06:58 -07:00
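MQA/GQA share each key/value head across a group of query heads, shrinking the KV cache. The semantics can be shown by expanding KV heads so a standard attention implementation applies (FlashAttention's kernels handle the grouping natively; this sketch just illustrates the idea, with made-up sizes):

```python
import torch

batch, seqlen, nheads_q, nheads_kv, headdim = 2, 8, 16, 4, 64
q = torch.randn(batch, seqlen, nheads_q, headdim)
k = torch.randn(batch, seqlen, nheads_kv, headdim)
v = torch.randn(batch, seqlen, nheads_kv, headdim)

# Each group of nheads_q // nheads_kv query heads shares one KV head.
# MQA is the special case nheads_kv == 1.
k = k.repeat_interleave(nheads_q // nheads_kv, dim=2)
v = v.repeat_interleave(nheads_q // nheads_kv, dim=2)
assert k.shape == q.shape  # now usable with any standard MHA kernel
```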
Tri Dao | 4f285b3547 | FlashAttention-2 release | 2023-07-17 06:21:34 -07:00
Tri Dao | 62e9814466 | [Rotary] Make sure frequency calculation is in fp32 | 2023-07-02 16:39:39 -07:00
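Computing the rotary frequencies in fp16/bf16 loses precision at large position indices, which is what the fix above guards against. A sketch of the fp32 computation (the `base` and `dim` values are illustrative):

```python
import torch

dim, base = 64, 10000.0
# Inverse frequencies in fp32: 1 / base^(2i/dim) for i = 0 .. dim/2 - 1.
inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))

t = torch.arange(2048, dtype=torch.float32)  # positions, also kept in fp32
freqs = torch.outer(t, inv_freq)             # (seqlen, dim/2)
cos, sin = freqs.cos(), freqs.sin()          # cast to the model dtype only at the end
```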
Tri Dao | 48bc6eacd6 | [Gen] Add rotary base as an argument to FT attention kernel | 2023-05-30 13:38:34 -07:00
Tri Dao | ba2fe7f378 | [Gen] Move allocate_inference_cache to within the model | 2023-04-20 18:15:12 -07:00
Tri Dao | 311d6606bf | [Gen] Fix FT kernel smem size, CG when batch size changed | 2023-04-20 17:03:13 -07:00
Tri Dao | ac3b684cdb | Have a separate nn.Dropout module in SelfAttention module | 2023-04-17 22:34:05 -07:00
Tri Dao | 605655bc66 | [Gen] Fix FT kernel when using CG | 2023-04-14 16:50:01 -07:00
Tri Dao | f5d0fbd468 | [FT] Fix FT's single query attention for bf16 hdim128 rotary | 2023-03-28 21:27:00 -07:00
Tri Dao | 4d87e4d875 | Implement GPT-J | 2023-03-22 16:16:58 -07:00
Tri Dao | 780e8eeabb | [ViT] Support timm checkpoint, add tests | 2023-01-16 01:20:34 -08:00
Tri Dao | 7c2191542a | [Gen] Make generation work with Tensor Parallel | 2023-01-15 11:34:27 -08:00
Tri Dao | 0938298e4c | [Gen] Adjust shape of kv_cache when using FT | 2023-01-07 17:27:54 -08:00
Tri Dao | 11be742aa3 | [Gen] Test generation with rotary embedding | 2023-01-07 14:37:54 -08:00
Tri Dao | 8d9674ed08 | Merge pull request #102 from Lamikins/main (fixed cross attention typeerror) | 2023-01-07 13:56:20 -08:00
Tri Dao | 93383bd55b | [TP] Implement TensorParallel without sequence parallel | 2023-01-07 13:45:22 -08:00
Darius Lam | aec35fd67c | fixed cross attention typeerror | 2023-01-07 12:58:41 -08:00
Tri Dao | a668890fcd | [Gen] Add option to run generation with FT attention kernel | 2023-01-03 22:10:31 -08:00
Tri Dao | 65b4064b2a | [FusedDense] Kick off input all_gather before weight dtype conversion | 2022-12-31 22:47:34 -08:00
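The FusedDense change above is an overlap trick: start the input all-gather asynchronously, do the weight dtype cast while the communication is in flight, then wait before the matmul. A sketch of the pattern with illustrative names (it assumes an initialized process group and is not FusedDense's actual implementation):

```python
import torch
import torch.distributed as dist
import torch.nn.functional as F

def gather_then_linear(x, weight, dtype=torch.bfloat16):
    # Kick off the all-gather of x across ranks first...
    world_size = dist.get_world_size()
    total_x = x.new_empty(world_size * x.shape[0], *x.shape[1:])
    handle = dist.all_gather_into_tensor(total_x, x.contiguous(), async_op=True)
    # ...so the weight dtype conversion overlaps with the communication.
    w = weight.to(dtype)
    handle.wait()
    return F.linear(total_x.to(dtype), w)
```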
Tri Dao | a6ec1782dc | Bump to v0.2.6 | 2022-12-27 22:05:20 -08:00
Tri Dao | 63670fd84a | Implement generation for GPT | 2022-12-27 21:01:50 -08:00
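For orientation, the decode loop that GPT generation implements follows the standard pattern sketched below (greedy decoding; names are illustrative, not the repo's API, and a real implementation prefills once, keeps a KV cache, and feeds only the newest token each step):

```python
import torch

@torch.no_grad()
def greedy_generate(model, input_ids, max_new_tokens):
    tokens = input_ids
    for _ in range(max_new_tokens):
        logits = model(tokens)  # (batch, seqlen, vocab); cached in practice
        next_id = logits[:, -1].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_id], dim=1)
    return tokens
```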
Tri Dao | 1e712ea8b0 | Implement TensorParallel for MHA | 2022-12-25 11:39:55 -08:00
Tri Dao | e68ebbe89a | Simplify FusedDense | 2022-12-22 21:25:31 -08:00
Tri Dao | 496e4f528c | Implement XPos (Sun et al.) | 2022-12-21 14:17:58 -08:00
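XPos (Sun et al., "A Length-Extrapolatable Transformer") extends rotary embedding with a per-dimension exponential scale, applied to queries and inverted for keys so attention decays with relative distance. A sketch of the scale computation, following the paper's zeta_i = (i/(d/2) + gamma)/(1 + gamma) with gamma = 0.4; treat the exact position centering as an assumption:

```python
import torch

dim, scale_base, seqlen = 64, 512, 2048
# Per-dimension decay factors: (2i + 0.4*dim) / (1.4*dim), i = 0 .. dim/2 - 1.
scale = (torch.arange(0, dim, 2, dtype=torch.float32) + 0.4 * dim) / (1.4 * dim)
power = (torch.arange(seqlen, dtype=torch.float32) - seqlen // 2) / scale_base
scale_t = scale ** power.unsqueeze(-1)  # (seqlen, dim/2)
# Queries use cos/sin * scale_t and keys use cos/sin / scale_t, so a q.k dot
# product at positions m, n picks up scale ** ((m - n) / scale_base):
# the centering cancels, leaving a smooth decay in relative distance.
```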
Tri Dao | 13cdceb377 | Implement last_layer_subset optimization for BERT | 2022-12-19 22:18:46 -08:00
Tri Dao | 5fb6df0e04 | Implement BERT | 2022-12-18 21:47:27 -08:00
Tri Dao | d4b320b31f | Add MLP, MHA, Block, Embedding modules | 2022-11-13 22:06:44 -08:00