flash-attention/csrc/flash_attn
cutlass@319a389f42   Add Cutlass as submodule                                    2022-06-02 09:54:16 -07:00
src                  Implement attention kernel that splits the batch into two   2022-10-13 20:49:02 -07:00
fmha_api.cpp         Fix #54: set device for multi-GPU case                      2022-10-16 12:51:26 -07:00