flash-attention/csrc/fused_dense_lib/README.md

This CUDA extension implements fused matmul + bias (forward and backward), and fused matmul + bias + gelu (forward and backward), adapted from Apex's FusedDense. We make it work for bfloat16.
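In practice the extension is usually reached through the Python wrappers in the flash-attn package rather than called directly (the matmul + bias + gelu path is wrapped similarly, e.g. by a fused MLP module). The sketch below assumes a `FusedDense` module in `flash_attn.ops.fused_dense` with an `nn.Linear`-like interface; the module path and signature are assumptions, so check `flash_attn/ops/fused_dense.py` for the actual API.

```python
# Minimal usage sketch (assumed API): fused matmul + bias through the Python wrapper.
# FusedDense is assumed to behave like nn.Linear; verify names in flash_attn/ops/fused_dense.py.
import torch
from flash_attn.ops.fused_dense import FusedDense  # assumed module path

layer = FusedDense(1024, 4096, bias=True).to(device="cuda", dtype=torch.bfloat16)
x = torch.randn(8, 512, 1024, device="cuda", dtype=torch.bfloat16, requires_grad=True)

out = layer(x)        # fused matmul + bias forward
out.sum().backward()  # fused backward: grads for x, layer.weight, and layer.bias
```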

For best performance, you should use CUDA >= 11.8. Earlier cuBLAS versions do not have the best matmul + bias + gelu performance for bfloat16.

```sh
cd csrc/fused_dense_lib && pip install .
```
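After installation, a quick sanity check is to import the compiled module and confirm the CUDA version PyTorch was built with; the module name `fused_dense_lib` is an assumption (check this directory's `setup.py` for the actual extension name).

```python
# Sanity check: the extension compiled and imports, and the toolkit version meets the
# CUDA >= 11.8 recommendation above. Module name fused_dense_lib is an assumption.
import torch
import fused_dense_lib  # noqa: F401

print("fused_dense_lib imported successfully")
print("PyTorch built against CUDA", torch.version.cuda)
```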