Mention that some CUDA extensions have only been tested on A100s
commit 43ab0b5205 (parent e4d3013e15)
csrc/fused_dense_lib/README.md
@@ -5,6 +5,9 @@ We make it work for bfloat16.
For best performance, you should use CUDA >= 11.8. CuBLAS versions before
this don't have the best matmul + bias + gelu performance for bfloat16.
It has only been tested on A100s.
```sh
cd csrc/fused_dense_lib && pip install .
```
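For reference, a minimal unfused PyTorch sketch of the matmul + bias + gelu pattern this extension accelerates; the tensor names and shapes below are illustrative only and are not part of the extension's API:

```python
import torch
import torch.nn.functional as F

# Unfused reference of the matmul + bias + gelu pattern the extension targets,
# in bfloat16 (the dtype the extension is written for). Shapes are illustrative.
x = torch.randn(8, 1024, dtype=torch.bfloat16, device="cuda")
weight = torch.randn(4096, 1024, dtype=torch.bfloat16, device="cuda")
bias = torch.randn(4096, dtype=torch.bfloat16, device="cuda")

out = F.gelu(F.linear(x, weight, bias))  # the extension fuses these steps into fewer kernels
```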
csrc/layer_norm/README.md
@@ -1,6 +1,9 @@
This CUDA extension implements fused dropout + residual + LayerNorm, based on
Apex's [FastLayerNorm](https://github.com/NVIDIA/apex/tree/master/apex/contrib/layer_norm).
We add dropout and residual, and make it work for both pre-norm and post-norm architectures.
It has only been tested on A100s.
```sh
cd csrc/layer_norm && pip install .
```
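For reference, an unfused PyTorch sketch of the dropout + residual + LayerNorm pattern the extension fuses (pre-norm style, where the un-normalized sum is also returned for the next residual connection); the function and variable names below are illustrative, not the extension's API:

```python
import torch
import torch.nn.functional as F

def dropout_add_layer_norm_ref(x0, residual, weight, bias, p=0.1, eps=1e-5):
    # Unfused reference: dropout on the incoming branch, add the residual,
    # then LayerNorm. The extension performs these steps in a single fused kernel.
    z = F.dropout(x0, p=p, training=True) + residual
    out = F.layer_norm(z, (z.shape[-1],), weight, bias, eps)
    return out, z  # pre-norm blocks also pass z along as the next residual

x0 = torch.randn(8, 1024, device="cuda")
residual = torch.randn(8, 1024, device="cuda")
weight = torch.ones(1024, device="cuda")
bias = torch.zeros(1024, device="cuda")
out, new_residual = dropout_add_layer_norm_ref(x0, residual, weight, bias)
```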
csrc/xentropy/README.md
@@ -1,6 +1,9 @@
This CUDA extension implements optimized cross-entropy loss, adapted from Apex's
[Xentropy](https://github.com/NVIDIA/apex/tree/master/apex/contrib/xentropy).
We make it work for bfloat16 and support in-place backward to save memory.
It has only been tested on A100s.
```sh
cd csrc/xentropy && pip install .
```
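For reference, the standard cross-entropy computation in plain PyTorch; the shapes below are illustrative. The in-place backward mentioned above saves memory by writing the logits' gradient into the logits buffer itself rather than allocating a separate tensor, which the plain version does not do:

```python
import torch
import torch.nn.functional as F

# Plain PyTorch cross-entropy for comparison. The extension computes the same loss,
# works directly in bfloat16, and can write grad(logits) into the logits storage
# during backward ("in-place backward") to avoid a second logits-sized allocation.
logits = torch.randn(8, 50257, dtype=torch.bfloat16, device="cuda", requires_grad=True)
labels = torch.randint(0, 50257, (8,), device="cuda")

loss = F.cross_entropy(logits.float(), labels)
loss.backward()  # allocates a separate gradient tensor for `logits`
```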