cutlass/test/unit/gemm
Latest commit: 3cfa5db2a2 by Jongsoo Park, 2022-02-16 09:53:21 -05:00
Actually use float accumulation in gemm_f16t_f16t_f16t_wmma_tensor_op_f32_sm70.cu (#407)

* Actually use float accumulation in gemm_f16t_f16t_f16t_wmma_tensor_op_f32_sm70.cu
* Update gemm_f16t_f16t_f16t_wmma_tensor_op_f32_sm70.cu: change the missing one

Co-authored-by: Haicheng Wu <57973641+hwu36@users.noreply.github.com>
device          Actually use float accumulation in gemm_f16t_f16t_f16t_wmma_tensor_op_f32_sm70.cu (#407)   2022-02-16 09:53:21 -05:00
kernel          Cutlass 2.6 Update 1 (#301)                                                                2021-07-27 17:58:30 -07:00
thread          Cutlass 2.6 Update 1 (#301)                                                                2021-07-27 17:58:30 -07:00
threadblock     Cutlass 2.6 Update 1 (#301)                                                                2021-07-27 17:58:30 -07:00
warp            CUTLASS 2.8 (#363)                                                                         2021-11-19 13:26:35 -08:00
CMakeLists.txt  Cutlass 2.6 Update 1 (#301)                                                                2021-07-27 17:58:30 -07:00
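The latest commit above switches a test to float accumulation for an f16 WMMA tensor-op GEMM. As a rough, self-contained illustration of why the accumulator type matters (a plain Python sketch, not CUTLASS code — it simulates fp16 rounding with the `struct` module's half-precision `'e'` format), accumulating many half-precision products in a half-precision accumulator can stall entirely once the partial sum grows past the point where the addend is smaller than half an fp16 ulp, while a float accumulator keeps every contribution:

```python
import struct

def to_f16(x):
    """Round a Python float to the nearest IEEE fp16 value, returned as a float."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

val = to_f16(0.1)  # 0.1 rounded to fp16: 0.0999755859375

# fp16 accumulator: every partial sum is rounded back to half precision,
# mimicking accumulating in half_t
acc16 = 0.0
for _ in range(4096):
    acc16 = to_f16(acc16 + val)

# wider accumulator: partial sums kept in full precision,
# mimicking float accumulation
acc32 = 0.0
for _ in range(4096):
    acc32 += val

# Once the fp16 sum reaches 256.0, the fp16 spacing is 0.25 and adding
# ~0.1 rounds back to the same value, so the sum stops growing.
print(acc16)  # 256.0
print(acc32)  # 409.5
```

The gap (256.0 vs 409.5) is the kind of systematic error that float accumulation avoids, which is why fp16 GEMM tests are typically checked against a float accumulator.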