diff --git a/csrc/fused_dense_lib/README.md b/csrc/fused_dense_lib/README.md
index 439a7c7..d0b3968 100644
--- a/csrc/fused_dense_lib/README.md
+++ b/csrc/fused_dense_lib/README.md
@@ -5,6 +5,9 @@
 We make it work for bfloat16.
 For best performance, you should use CUDA >= 11.8. CuBLAS versions before this
 don't have the best matmul + bias + gelu performance for bfloat16.
+
+It has only been tested on A100s.
+
 ```sh
 cd csrc/fused_dense_lib && pip install .
 ```
diff --git a/csrc/layer_norm/README.md b/csrc/layer_norm/README.md
index 69356a5..c5cd8ad 100644
--- a/csrc/layer_norm/README.md
+++ b/csrc/layer_norm/README.md
@@ -1,6 +1,9 @@
 This CUDA extension implements fused dropout + residual + LayerNorm, based on Apex's
 [FastLayerNorm](https://github.com/NVIDIA/apex/tree/master/apex/contrib/layer_norm).
 We add dropout and residual, and make it work for both pre-norm and post-norm architectures.
+
+It has only been tested on A100s.
+
 ```sh
 cd csrc/layer_norm && pip install .
 ```
diff --git a/csrc/xentropy/README.md b/csrc/xentropy/README.md
index 45be7de..7970f39 100644
--- a/csrc/xentropy/README.md
+++ b/csrc/xentropy/README.md
@@ -1,6 +1,9 @@
 This CUDA extension implements optimized cross-entropy loss, adapted from Apex's
 [Xentropy](https://github.com/NVIDIA/apex/tree/master/apex/contrib/xentropy).
 We make it work for bfloat16 and support in-place backward to save memory.
+
+It has only been tested on A100s.
+
 ```sh
 cd csrc/xentropy && pip install .
 ```
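
After installing the three extensions, a quick smoke test on an A100 might look like the sketch below. This assumes the surrounding flash-attn package exposes Python wrappers at `flash_attn.ops.fused_dense`, `flash_attn.ops.layer_norm`, and `flash_attn.losses.cross_entropy`; those paths and class names have moved between releases, so treat them as assumptions, not a fixed API.

```python
# Minimal sanity check for the three CUDA extensions. The wrapper module
# paths below are assumptions and may differ between flash-attn releases.
import torch

from flash_attn.ops.fused_dense import FusedDense
from flash_attn.ops.layer_norm import dropout_add_layer_norm
from flash_attn.losses.cross_entropy import CrossEntropyLoss

device, dtype = "cuda", torch.bfloat16
batch, seqlen, hidden, vocab = 4, 512, 1024, 32000

# csrc/fused_dense_lib: fused matmul + bias, used as a drop-in for nn.Linear.
linear = FusedDense(hidden, 4 * hidden, device=device, dtype=dtype)
x = torch.randn(batch, seqlen, hidden, device=device, dtype=dtype)
y = linear(x)

# csrc/layer_norm: dropout + residual add + LayerNorm fused into one kernel.
ln_weight = torch.ones(hidden, device=device, dtype=dtype)
ln_bias = torch.zeros(hidden, device=device, dtype=dtype)
residual = torch.randn_like(x)
out = dropout_add_layer_norm(x, residual, ln_weight, ln_bias, 0.1, 1e-6)

# csrc/xentropy: optimized cross-entropy; inplace_backward=True writes the
# gradient into the logits tensor itself to save memory.
loss_fn = CrossEntropyLoss(inplace_backward=True)
logits = torch.randn(batch * seqlen, vocab, device=device,
                     dtype=dtype, requires_grad=True)
labels = torch.randint(0, vocab, (batch * seqlen,), device=device)
loss_fn(logits, labels).backward()
```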