flash-attention/csrc (last commit 2023-09-18 15:29:06 -07:00)
Directory           Latest commit                                                          Date
cutlass@34fd98056b  Remove constexpr in launch template to fix CI compilation             2023-09-03 22:59:41 -07:00
flash_attn          Swap seqlen_q, nheads for MQA when seqlen_q=1 for fwd (h/t Daniel H)  2023-09-18 14:52:16 -07:00
ft_attention        [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead     2023-09-18 15:29:06 -07:00
fused_dense_lib     [FusedDense] Allocate lt_workspace on input device                    2023-05-30 14:17:26 -07:00
fused_softmax       Add Megatron attention implementation for benchmarking                2022-10-23 23:04:16 -07:00
layer_norm          Fix random state for dropout_layer_norm (#315)                        2023-07-23 15:05:13 -07:00
rotary              Support H100 for other CUDA extensions                                2023-03-15 16:59:27 -07:00
xentropy            [CE] Implement CrossEntropyLoss in Triton                             2023-09-15 20:05:28 -07:00
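The ft_attention entry deprecates that extension in favor of flash_attn_with_kvcache for incremental decoding. A minimal decoding-step sketch, assuming a flash-attn build recent enough to export flash_attn_with_kvcache from the top-level flash_attn package; the tensor shapes and sizes here are illustrative, and a CUDA GPU is required:

    import torch
    from flash_attn import flash_attn_with_kvcache

    batch, nheads, headdim = 2, 16, 64
    max_cache_len = 1024

    # Query for the new token; seqlen_q=1 is the single-token decoding case
    # that the flash_attn MQA commit above optimizes.
    q = torch.randn(batch, 1, nheads, headdim, dtype=torch.float16, device="cuda")
    # Pre-allocated KV cache buffers, updated in place by the kernel.
    k_cache = torch.zeros(batch, max_cache_len, nheads, headdim,
                          dtype=torch.float16, device="cuda")
    v_cache = torch.zeros_like(k_cache)
    # New key/value for the current step, appended to the cache.
    k_new = torch.randn(batch, 1, nheads, headdim, dtype=torch.float16, device="cuda")
    v_new = torch.randn_like(k_new)
    # Current length of each sequence already in the cache.
    cache_seqlens = torch.full((batch,), 128, dtype=torch.int32, device="cuda")

    out = flash_attn_with_kvcache(q, k_cache, v_cache, k=k_new, v=v_new,
                                  cache_seqlens=cache_seqlens, causal=True)
    # out: (batch, 1, nheads, headdim)

k_cache and v_cache are written in place at the positions given by cache_seqlens, so the same buffers can be reused across decoding steps without a separate cache-update pass.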
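The xentropy directory is the CUDA backend behind the Python-level loss, which the commit above reimplements in Triton. A minimal usage sketch, assuming the flash_attn.losses.cross_entropy module and its inplace_backward flag (both as I understand them around this commit; treat the exact import path and flag name as assumptions):

    import torch
    from flash_attn.losses.cross_entropy import CrossEntropyLoss

    # Drop-in replacement for torch.nn.CrossEntropyLoss backed by a fused kernel;
    # inplace_backward reuses the logits buffer for the gradient to save memory.
    loss_fn = CrossEntropyLoss(inplace_backward=True)
    logits = torch.randn(8, 32000, dtype=torch.float16, device="cuda",
                         requires_grad=True)
    labels = torch.randint(0, 32000, (8,), device="cuda")
    loss = loss_fn(logits, labels)
    loss.backward()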