flash-attention / tests at commit 75e334d407

Latest commit: b3177dfaf6 by Tri Dao, "[GPT] Enable FlashAttention for GPT-J" (2023-07-21 17:29:10 -07:00)

layers               [Rotary] Implement GPT-J style (interleaved) rotary                  2023-03-14 14:35:53 -07:00
losses               Tweak CrossEntropyLoss to take process_group in init                 2022-12-27 10:47:43 -08:00
models               [GPT] Enable FlashAttention for GPT-J                                2023-07-21 17:29:10 -07:00
modules              [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP    2023-01-17 18:12:27 -08:00
ops                  [LayerNorm] Make sure memory addresses are aligned to 16 bytes       2023-07-04 14:53:12 -07:00
test_flash_attn.py   FlashAttention-2 release                                             2023-07-17 06:21:34 -07:00
test_rotary.py       Add MLP, MHA, Block, Embedding modules                               2022-11-13 22:06:44 -08:00
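test_flash_attn.py covers the attention kernels updated in the FlashAttention-2 release noted above. As a rough illustration only (not taken from this listing), the sketch below assumes the flash-attn 2.x package is installed with a CUDA GPU available; the shapes and the causal flag are illustrative values, not values used by the tests.

    # Minimal sketch, assuming flash-attn 2.x and a CUDA device (not part of this repo listing).
    import torch
    from flash_attn import flash_attn_func

    batch, seqlen, nheads, headdim = 2, 128, 8, 64
    # The kernels expect fp16/bf16 tensors shaped (batch, seqlen, nheads, headdim) on GPU.
    q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
    k = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
    v = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)

    out = flash_attn_func(q, k, v, causal=True)  # output has the same shape as q
    print(out.shape)  # torch.Size([2, 128, 8, 64])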