flash-attention/csrc
Directory        Last commit message                                                  Date
flash_attn       only 1 thread writes to global mem in fprop                          2023-04-15 06:09:41 +00:00
ft_attention     [Gen] Fix FT kernel smem size, CG when batch size changed            2023-04-20 17:03:13 -07:00
fused_dense_lib  [FusedDense] Set workspace size to 32M for Hopper and 4M for others  2023-04-06 23:40:15 -07:00
fused_softmax    Add Megatron attention implementation for benchmarking               2022-10-23 23:04:16 -07:00
layer_norm       [LayerNorm] Implement LN with parallel residual, support dim 8k      2023-03-31 14:23:45 -07:00
rotary           Support H100 for other CUDA extensions                               2023-03-15 16:59:27 -07:00
xentropy         Support H100 for other CUDA extensions                               2023-03-15 16:59:27 -07:00