flash-attention / csrc (at commit ca81f32e04)

Latest commit: ca81f32e04 "Implement rotary embedding in CUDA" by Tri Dao, 2022-11-04 22:42:01 -07:00

flash_attn      Get rid of o_rows_are_valid since we don't have headdim=16 anymore    2022-10-24 17:29:36 -07:00
fused_softmax   Add Megatron attention implementation for benchmarking                2022-10-23 23:04:16 -07:00
rotary          Implement rotary embedding in CUDA                                     2022-11-04 22:42:01 -07:00