flash-attention/csrc/flash_attn

Latest commit: Tri Dao b17c6fe235 "Reduce smem usage for Q and dO in the backward pass" (2022-06-03 16:59:11 -07:00)
    From 4KB per buffer to 2KB per buffer. This saves us 8KB of smem (each Q and dO have 2 buffers).
cutlass @ 319a389f42   Add Cutlass as submodule                              2022-06-02 09:54:16 -07:00
src/                   Reduce smem usage for Q and dO in the backward pass   2022-06-03 16:59:11 -07:00
fmha_api.cpp           Support Turing mma instructions                       2022-06-03 16:58:44 -07:00