From 43ab0b5205decb1f8a8e499d03de529703700cb4 Mon Sep 17 00:00:00 2001
From: Tri Dao
Date: Tue, 15 Nov 2022 07:10:25 -0800
Subject: [PATCH] Mention that some CUDA extensions have only been tested on
 A100s

---
 csrc/fused_dense_lib/README.md | 3 +++
 csrc/layer_norm/README.md      | 3 +++
 csrc/xentropy/README.md        | 3 +++
 3 files changed, 9 insertions(+)

diff --git a/csrc/fused_dense_lib/README.md b/csrc/fused_dense_lib/README.md
index 439a7c7..d0b3968 100644
--- a/csrc/fused_dense_lib/README.md
+++ b/csrc/fused_dense_lib/README.md
@@ -5,6 +5,9 @@ We make it work for bfloat16.
 
 For best performance, you should use CUDA >= 11.8. CuBLAS versions before this
 doesn't have the best matmul + bias + gelu performance for bfloat16.
+
+It has only been tested on A100s.
+
 ```sh
 cd csrc/fused_dense_lib && pip install .
 ```
diff --git a/csrc/layer_norm/README.md b/csrc/layer_norm/README.md
index 69356a5..c5cd8ad 100644
--- a/csrc/layer_norm/README.md
+++ b/csrc/layer_norm/README.md
@@ -1,6 +1,9 @@
 This CUDA extension implements fused dropout + residual + LayerNorm, based on
 Apex's [FastLayerNorm](https://github.com/NVIDIA/apex/tree/master/apex/contrib/layer_norm).
 We add dropout and residual, and make it work for both pre-norm and post-norm architecture.
+
+It has only been tested on A100s.
+
 ```sh
 cd csrc/layer_norm && pip install .
 ```
diff --git a/csrc/xentropy/README.md b/csrc/xentropy/README.md
index 45be7de..7970f39 100644
--- a/csrc/xentropy/README.md
+++ b/csrc/xentropy/README.md
@@ -1,6 +1,9 @@
 This CUDA extension implements optimized cross-entropy loss, adapted from Apex's
 [Xentropy](https://github.com/NVIDIA/apex/tree/master/apex/contrib/xentropy).
 We make it work for bfloat16 and support in-place backward to save memory.
+
+It has only been tested on A100s.
+
 ```sh
 cd csrc/xentropy && pip install .
 ```
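
Not part of the patch itself, but as context for the extensions it documents: once built with `pip install .`, these kernels are reached through the flash_attn Python wrappers rather than called directly. Below is a minimal usage sketch, assuming the wrapper names `flash_attn.ops.layer_norm.dropout_add_layer_norm` and `flash_attn.losses.cross_entropy.CrossEntropyLoss`; module paths and signatures have changed across releases, so treat it as illustrative only.

```python
# Illustrative only: the module paths and signatures below are assumptions
# about the flash_attn wrappers around these extensions and may vary by version.
import torch

# csrc/layer_norm: fused dropout + residual + LayerNorm in a single kernel,
# i.e. LayerNorm(dropout(x) + residual) without materializing intermediates.
from flash_attn.ops.layer_norm import dropout_add_layer_norm

hidden = 1024
x = torch.randn(8, hidden, device="cuda", dtype=torch.bfloat16)
residual = torch.randn_like(x)
weight = torch.ones(hidden, device="cuda", dtype=torch.bfloat16)
bias = torch.zeros(hidden, device="cuda", dtype=torch.bfloat16)
out = dropout_add_layer_norm(x, residual, weight, bias, 0.1, 1e-5)

# csrc/xentropy: optimized cross-entropy loss; inplace_backward=True writes
# the gradient into the logits buffer to save memory on large vocabularies.
from flash_attn.losses.cross_entropy import CrossEntropyLoss

loss_fn = CrossEntropyLoss(inplace_backward=True)
logits = torch.randn(8, 50257, device="cuda", dtype=torch.bfloat16,
                     requires_grad=True)
labels = torch.randint(0, 50257, (8,), device="cuda")
loss_fn(logits, labels).backward()

# csrc/fused_dense_lib (fused matmul + bias [+ gelu]) is exposed through
# nn.Linear-style wrappers in flash_attn.ops.fused_dense; the exact class
# names differ between releases.
```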