flash-attention/csrc

Latest commit: ce3e7280f8 "Allow varlen_fwd to take optional seqused_k (#647)"
Author: Jeremy Reizenstein (Co-authored-by: bottler <bottler@users.noreply.github.com>)
Date: 2023-11-27 00:41:23 -08:00
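The headline change touches the variable-length ("varlen") forward path implemented in this directory. As context, here is a minimal, hedged sketch of the packed calling convention that path serves, using the public flash_attn_varlen_func Python wrapper. The per-batch seqused_k tensor added in #647 lives at the C++ varlen_fwd level; whether and how it is exposed through the Python wrapper depends on the flash-attn version, so it is only mentioned in a comment here, and the shapes/dtypes below are illustrative assumptions.

```python
# Sketch only: assumes a CUDA-capable flash-attn 2.x install.
import torch
from flash_attn import flash_attn_varlen_func

batch, nheads, headdim = 2, 8, 64
seqlens = torch.tensor([5, 7], dtype=torch.int32, device="cuda")
total = int(seqlens.sum())

# Varlen kernels take sequences packed into a single (total_tokens, nheads, headdim) tensor.
q = torch.randn(total, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn(total, nheads, headdim, device="cuda", dtype=torch.float16)
v = torch.randn(total, nheads, headdim, device="cuda", dtype=torch.float16)

# Cumulative sequence lengths mark where each sequence starts: [0, 5, 12].
cu_seqlens = torch.nn.functional.pad(
    torch.cumsum(seqlens, 0, dtype=torch.int32), (1, 0)
)

# #647 additionally lets the C++ varlen_fwd accept an optional per-batch
# "seqused_k" (number of valid keys per sequence); not passed here.
out = flash_attn_varlen_func(
    q, k, v,
    cu_seqlens_q=cu_seqlens, cu_seqlens_k=cu_seqlens,
    max_seqlen_q=int(seqlens.max()), max_seqlen_k=int(seqlens.max()),
    causal=True,
)
# out: (total_tokens, nheads, headdim)
```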
Name | Last commit message | Last commit date
cutlass (submodule @ 44c704eae8) | Update cutlass to 3.2.2 | 2023-11-19 21:43:48 -08:00
flash_attn | Allow varlen_fwd to take optional seqused_k (#647) | 2023-11-27 00:41:23 -08:00
ft_attention | [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead (see the sketch after this listing) | 2023-09-18 15:29:06 -07:00
fused_dense_lib | [FusedDense] Allocate lt_workspace on input device | 2023-05-30 14:17:26 -07:00
fused_softmax | Add Megatron attention implementation for benchmarking | 2022-10-23 23:04:16 -07:00
layer_norm | Fix random state for dropout_layer_norm (#315) | 2023-07-23 15:05:13 -07:00
rotary | Support H100 for other CUDA extensions | 2023-03-15 16:59:27 -07:00
xentropy | [CE] Implement CrossEntropyLoss in Triton | 2023-09-15 20:05:28 -07:00
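The ft_attention entry above deprecates that extension in favor of flash_attn_with_kvcache. Below is a minimal, hedged sketch of a decode-style call, assuming a recent flash-attn 2.x build that exports flash_attn_with_kvcache from the top-level package; shapes, dtypes, and the cache sizes are illustrative assumptions, not values taken from this repository.

```python
# Sketch only: single-token decode step against a pre-allocated KV cache.
import torch
from flash_attn import flash_attn_with_kvcache

batch, nheads, headdim = 2, 8, 64
max_cache_len = 256

# KV cache: (batch, max_cache_len, nheads_k, headdim), fp16/bf16 on CUDA.
k_cache = torch.zeros(batch, max_cache_len, nheads, headdim, device="cuda", dtype=torch.float16)
v_cache = torch.zeros_like(k_cache)

# Number of cached tokens already valid for each sequence (int32).
cache_seqlens = torch.tensor([10, 37], dtype=torch.int32, device="cuda")

# One new token per sequence; the kernel writes k_new/v_new into the cache at
# position cache_seqlens and attends over the cached tokens plus the new one.
q = torch.randn(batch, 1, nheads, headdim, device="cuda", dtype=torch.float16)
k_new = torch.randn(batch, 1, nheads, headdim, device="cuda", dtype=torch.float16)
v_new = torch.randn(batch, 1, nheads, headdim, device="cuda", dtype=torch.float16)

out = flash_attn_with_kvcache(
    q, k_cache, v_cache,
    k=k_new, v=v_new,
    cache_seqlens=cache_seqlens,
    causal=True,
)
# out: (batch, 1, nheads, headdim)
```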