From beae168f90aa3887274ed6af9969e903c797c595 Mon Sep 17 00:00:00 2001
From: Yujia Zhai
Date: Tue, 6 Sep 2022 13:32:44 -0700
Subject: [PATCH] fix broken link (#620)

Co-authored-by: yuzhai
---
 CHANGELOG.md | 2 +-
 README.md    | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 2418c845..cecffb5e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,7 +4,7 @@
 * [Grouped convolution targeting implicit GEMM](test/unit/conv/device/conv2d_fprop_implicit_gemm_f16nhwc_f16nhwc_f32nhwc_tensor_op_f32_sm80.cu)
 * [Depthwise separable convolution](test/unit/conv/device/depthwise_fprop_implicit_gemm_f16nhwc_f16nhwc_f16nhwc_simt_f16_sm60.cu)
 * Optimizations for CUTLASS's [Grouped GEMM](examples/24_gemm_grouped/gemm_grouped.cu) kernel
-* [Grouped GEMM for Multihead Attention](examples/50_multi_head_attention)
+* [Grouped GEMM for Multihead Attention](examples/41_multi_head_attention)
 * [GEMM + Layer norm fusion for Ampere](examples/37_gemm_layernorm_gemm_fusion/)
 * Updates and bugfixes from the community (thanks!)

diff --git a/README.md b/README.md
index c8e29f40..e884c735 100644
--- a/README.md
+++ b/README.md
@@ -42,7 +42,7 @@ CUTLASS 2.10 is an update to CUTLASS adding:
 - [Grouped convolution targeting implicit GEMM](test/unit/conv/device/conv2d_fprop_implicit_gemm_f16nhwc_f16nhwc_f32nhwc_tensor_op_f32_sm80.cu)
 - [Depthwise separable convolution](test/unit/conv/device/depthwise_fprop_implicit_gemm_f16nhwc_f16nhwc_f16nhwc_simt_f16_sm60.cu)
 - Optimizations for CUTLASS's [Grouped GEMM](examples/24_gemm_grouped/gemm_grouped.cu) kernel
-- [Grouped GEMM for Multihead Attention](examples/50_multi_head_attention)
+- [Grouped GEMM for Multihead Attention](examples/41_multi_head_attention)
 - [GEMM + Layer norm fusion for Ampere](examples/37_gemm_layernorm_gemm_fusion/)
 - Updates and bugfixes from the community (thanks!)
 - **Deprecation announcement:** CUTLASS plans to deprecate the following: