This CUDA extension implements fused matmul + bias (forward and backward) and fused
matmul + bias + gelu (forward and backward), adapted from Apex's
[FusedDense](https://github.com/NVIDIA/apex/tree/master/apex/fused_dense).
We extend it to support bfloat16.
For best performance, use CUDA >= 11.8: cuBLAS versions before this don't have the
best matmul + bias + gelu performance for bfloat16.
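
For reference, here is a minimal unfused PyTorch sketch of what the fused kernels
compute (shapes and the tanh GELU variant are illustrative assumptions; check the
kernel source for the exact approximation used):

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: x (batch, in_features), weight (out_features, in_features),
# bias (out_features,). A CUDA device and bf16 dtype mirror the intended use case.
x = torch.randn(8, 1024, dtype=torch.bfloat16, device="cuda")
weight = torch.randn(4096, 1024, dtype=torch.bfloat16, device="cuda")
bias = torch.randn(4096, dtype=torch.bfloat16, device="cuda")

# Fused matmul + bias: the extension computes this in a single kernel instead of
# a matmul followed by a separate bias add.
out = F.linear(x, weight, bias)

# Fused matmul + bias + gelu: additionally applies GELU in the same pass, avoiding
# an extra read/write of the intermediate activation (tanh approximation assumed).
out_gelu = F.gelu(F.linear(x, weight, bias), approximate="tanh")
```

To install: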
```sh
cd csrc/fused_dense_lib && pip install .
```
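
After installing, you can sanity-check the build by importing the compiled module
(assuming the extension name `fused_dense_lib` from this directory's `setup.py`):

```python
# Hypothetical smoke test: the module name is taken from setup.py and may differ.
import fused_dense_lib  # noqa: F401
print("fused_dense_lib imported successfully")
```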