This CUDA extension implements fused matmul + bias (forward and backward) and fused matmul + bias + gelu (forward and backward), adapted from Apex's FusedDense. We extend it to support bfloat16.
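
For reference, here is an unfused PyTorch sketch of the computations these kernels fuse. This is illustrative only: the shapes are made up, and the extension is typically invoked through flash_attn's Python wrappers rather than code like this.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes, just for illustration.
x = torch.randn(8, 512, 1024, dtype=torch.bfloat16, device="cuda", requires_grad=True)
w = torch.randn(4096, 1024, dtype=torch.bfloat16, device="cuda", requires_grad=True)
b = torch.randn(4096, dtype=torch.bfloat16, device="cuda", requires_grad=True)

# matmul + bias: the computation the fused dense kernels perform in one pass.
out = F.linear(x, w, b)

# matmul + bias + gelu: the computation the fused dense + gelu kernels perform.
out_gelu = F.gelu(F.linear(x, w, b))

# Autograd handles the backward here; the extension fuses it into dedicated kernels.
out_gelu.sum().backward()
```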

For best performance, you should use CUDA >= 11.8, as earlier cuBLAS versions don't have the best matmul + bias + gelu performance for bfloat16.
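
A quick way to check which CUDA toolkit version your PyTorch build was compiled against (a rough proxy; the nvcc used to build this extension should match or exceed it):

```python
import torch

# torch.version.cuda is a string like "11.8", or None on CPU-only builds.
if torch.version.cuda is None or tuple(map(int, torch.version.cuda.split(".")[:2])) < (11, 8):
    print("CUDA >= 11.8 is recommended for the fastest bf16 matmul + bias + gelu")
```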

It has only been tested on A100s.
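
If you want to check your GPU before building, A100s report compute capability (8, 0); other GPUs may build but are untested here:

```python
import torch

# A100 corresponds to sm_80, i.e. compute capability (8, 0).
if torch.cuda.get_device_capability() != (8, 0):
    print("Warning: this extension has only been tested on A100 (sm_80)")
```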

cd csrc/fused_dense_lib && pip install .
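
After installation, a minimal smoke test is to import the built module. This assumes the extension is named `fused_dense_lib`, matching the name used in this directory's setup.py:

```python
# Hypothetical smoke test; assumes the built extension module is `fused_dense_lib`.
import fused_dense_lib
print("fused_dense_lib loaded from", fused_dense_lib.__file__)
```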