style(examples): typo (#1080)

* Update ampere_tensorop_conv2dfprop.cu

Noticed a typo while learning CUTLASS; this PR fixes it.

* Update ampere_gemm_operand_reduction_fusion.cu
tpoisonooo 2023-09-11 22:13:22 +08:00 committed by GitHub
parent 34bbadd3ff
commit a77b2c9cb8
2 changed files with 2 additions and 2 deletions

ampere_tensorop_conv2dfprop.cu

@@ -53,7 +53,7 @@ can be used to form warp tiles (the tile shape each warp computes),
and multiple warp tiles can be used to compute threadblock tiles
(the tile shape computed by a threadblock).
-In thie example, we split variable initialization into two parts.
+In this example, we split variable initialization into two parts.
1. Setting up data properties: describes how tensors are laid out in the memory
and how the kernel can view them (logical to physical mapping)

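For context on the tile hierarchy the corrected comment describes, here is a minimal sketch of how the threadblock, warp, and instruction tile shapes are typically declared for an Ampere tensor-op kernel in CUTLASS 2.x. The specific shape values are illustrative assumptions, not taken from this example file.

// Minimal sketch (values are illustrative): each level of the tile
// hierarchy is expressed as a cutlass::gemm::GemmShape<M, N, K>.
#include "cutlass/gemm/gemm.h"

// Tile computed by one threadblock per main-loop iteration.
using ThreadblockShape = cutlass::gemm::GemmShape<128, 128, 32>;

// Tile computed by one warp; multiple warp tiles compose the threadblock tile.
using WarpShape = cutlass::gemm::GemmShape<64, 64, 32>;

// Shape of a single Ampere mma.sync tensor core instruction (FP16/BF16).
using InstructionShape = cutlass::gemm::GemmShape<16, 8, 16>;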
ampere_gemm_operand_reduction_fusion.cu

@@ -30,7 +30,7 @@
**************************************************************************************************/
/**
-The example demenstrates how to reduce one of the operands of the GEMM along the k-dimension when
+The example demonstrates how to reduce one of the operands of the GEMM along the k-dimension when
computing GEMM. So the output also contains either a Mx1 or 1XN vector. It only works with Ampere
16x8x16 FP16/BF16 tensor cores, though it is not difficult to apply to other Turing/Ampere tensor
core instructions.
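Independent of the fused tensor-core kernel, a plain reference loop helps clarify what reducing an operand along the k-dimension means: alongside D = A * B, the kernel also produces an Mx1 vector of row sums of A (or, symmetrically, a 1xN vector of column sums of B). The function below is only an illustrative host-side sketch, not the CUTLASS implementation.

// Reference semantics of a GEMM fused with a k-dimension reduction of A.
// Caller allocates D as M*N and reduce_A as M elements.
#include <vector>

void gemm_with_a_row_reduction(int M, int N, int K,
                               const std::vector<float>& A,   // M x K, row-major
                               const std::vector<float>& B,   // K x N, row-major
                               std::vector<float>& D,         // M x N, row-major
                               std::vector<float>& reduce_A)  // M x 1
{
  for (int i = 0; i < M; ++i) {
    // Reduction of operand A along the k-dimension: sum of row i.
    reduce_A[i] = 0.f;
    for (int k = 0; k < K; ++k) {
      reduce_A[i] += A[i * K + k];
    }
    // Standard GEMM accumulation for row i of the output.
    for (int j = 0; j < N; ++j) {
      float acc = 0.f;
      for (int k = 0; k < K; ++k) {
        acc += A[i * K + k] * B[k * N + j];
      }
      D[i * N + j] = acc;
    }
  }
}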