This CUDA extension implements fused matmul + bias (forward and backward), and fused matmul + bias + gelu
(forward and backward), adapted from Apex's
[FusedDense](https://github.com/NVIDIA/apex/tree/master/apex/fused_dense).
We make it work for bfloat16.
For best performance, use CUDA >= 11.8; cuBLAS versions before
this don't have the best matmul + bias + gelu performance for bfloat16.
```sh
cd csrc/fused_dense_lib && pip install .
```
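
After installing, the kernels are typically used through the Python wrappers in the flash-attention package rather than invoked directly. Below is a minimal usage sketch, assuming the `FusedDense` and `FusedMLP` modules exposed in `flash_attn.ops.fused_dense` (the module path and constructor signatures are assumptions based on the flash-attention repo; check your installed version):

```python
import torch
from flash_attn.ops.fused_dense import FusedDense, FusedMLP  # assumed module path

# Fused matmul + bias: a drop-in replacement for torch.nn.Linear.
dense = FusedDense(1024, 4096).to(device="cuda", dtype=torch.bfloat16)

# Fused matmul + bias + gelu + matmul + bias (a full MLP block).
mlp = FusedMLP(1024, 4096).to(device="cuda", dtype=torch.bfloat16)

x = torch.randn(2, 512, 1024, device="cuda", dtype=torch.bfloat16,
                requires_grad=True)
y = dense(x)        # fused matmul + bias forward
z = mlp(x)          # fused matmul + bias + gelu forward
z.sum().backward()  # backward passes are fused as well
```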