squall / flash-attention
flash-attention / csrc / flash_attn (history at commit 52fb4b729b)

Latest commit 52fb4b729b by Tri Dao: Fix #54: set device for multi-GPU case (2022-10-16 12:51:26 -07:00)
..
cutlass @ 319a389f42    Add Cutlass as submodule                                     2022-06-02 09:54:16 -07:00
src                     Implement attention kernel that splits the batch into two   2022-10-13 20:49:02 -07:00
fmha_api.cpp            Fix #54: set device for multi-GPU case                       2022-10-16 12:51:26 -07:00
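
The latest commit's message, "set device for multi-GPU case", names a common bug in PyTorch C++ extensions: kernels launch on the current CUDA device rather than on the device that actually holds the input tensors. Below is a minimal sketch of that fix pattern; the function name and body are illustrative assumptions, not the actual contents of fmha_api.cpp.

// Hedged sketch of the multi-GPU device-fix pattern; the function name and
// body are illustrative, not the actual code in fmha_api.cpp.
#include <torch/extension.h>
#include <c10/cuda/CUDAGuard.h>

torch::Tensor fmha_forward(const torch::Tensor &q,
                           const torch::Tensor &k,
                           const torch::Tensor &v) {
    TORCH_CHECK(q.is_cuda(), "q must be a CUDA tensor");
    // Without this guard, kernels launch on the current device (often cuda:0),
    // which fails or misbehaves when the inputs live on another GPU.
    const c10::cuda::CUDAGuard device_guard(q.device());
    // ... attention kernels launched here now run on q's device ...
    return torch::empty_like(q);
}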
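
The src entry's message, "Implement attention kernel that splits the batch into two", gives no detail on the split itself. As a heavily hedged illustration of the general idea only (not the repository's kernel), splitting a batch can mean issuing two launches, one per half, for example on separate streams so they can overlap:

// Illustration only: one generic way to split a batch across two kernel
// launches. Names and data layout are assumptions; this is not the
// repository's attention kernel.
#include <cuda_runtime.h>

__global__ void attend_stub(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];  // placeholder for the real attention math
}

void launch_split_batch(const float *in, float *out, int batch, int per_item,
                        cudaStream_t s0, cudaStream_t s1) {
    int half = batch / 2;
    int n0 = half * per_item;            // elements in the first half
    int n1 = (batch - half) * per_item;  // elements in the second half
    attend_stub<<<(n0 + 255) / 256, 256, 0, s0>>>(in, out, n0);
    attend_stub<<<(n1 + 255) / 256, 256, 0, s1>>>(in + n0, out + n0, n1);
}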