squall/flash-attention
csrc directory, at commit 6d673cd961

Latest commit: 37c6e05406 by Tri Dao, "Implement flash_attn_with_kvcache", 2023-09-04 00:11:44 -07:00
Name                   Last commit message                                         Last commit date
cutlass @ 34fd98056b   Remove constexpr in launch template to fix CI compilation   2023-09-03 22:59:41 -07:00
flash_attn             Implement flash_attn_with_kvcache                           2023-09-04 00:11:44 -07:00
ft_attention           [ft_attention] Fix for seqlen=8136 (#488)                   2023-08-28 10:00:22 -07:00
fused_dense_lib        [FusedDense] Allocate lt_workspace on input device          2023-05-30 14:17:26 -07:00
fused_softmax          Add Megatron attention implementation for benchmarking      2022-10-23 23:04:16 -07:00
layer_norm             Fix random state for dropout_layer_norm (#315)              2023-07-23 15:05:13 -07:00
rotary                 Support H100 for other CUDA extensions                      2023-03-15 16:59:27 -07:00
xentropy               Support H100 for other CUDA extensions                      2023-03-15 16:59:27 -07:00
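The flash_attn directory holds the CUDA kernels behind the flash_attn_with_kvcache call added in the latest commit above. Below is a minimal usage sketch of the Python-level wrapper exposed by the flash_attn package; the tensor shapes, dtypes, and initial cache lengths are illustrative assumptions, not taken from this listing.

    # Hedged sketch: assumes the flash_attn package exposes flash_attn_with_kvcache
    # as named in the commit above; shapes and values below are assumptions.
    import torch
    from flash_attn import flash_attn_with_kvcache

    batch, nheads, headdim = 2, 16, 64
    cache_len = 1024                      # pre-allocated KV-cache length

    # One new query token per sequence (a typical autoregressive decoding step).
    q = torch.randn(batch, 1, nheads, headdim, device="cuda", dtype=torch.float16)
    k_new = torch.randn_like(q)
    v_new = torch.randn_like(q)

    # Pre-allocated cache tensors; cache_seqlens tracks how many slots are filled.
    k_cache = torch.zeros(batch, cache_len, nheads, headdim, device="cuda", dtype=torch.float16)
    v_cache = torch.zeros_like(k_cache)
    cache_seqlens = torch.full((batch,), 128, dtype=torch.int32, device="cuda")

    # Appends k_new/v_new into the cache at position cache_seqlens and computes
    # attention of q over the updated cache in a single kernel call.
    out = flash_attn_with_kvcache(
        q, k_cache, v_cache,
        k=k_new, v=v_new,
        cache_seqlens=cache_seqlens,
        causal=True,
    )
    print(out.shape)  # expected: (batch, 1, nheads, headdim)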