flash-attention / csrc (tree d1a3b52f17)

Latest commit 4f285b3547 by Tri Dao: FlashAttention-2 release (2023-07-17 06:21:34 -07:00)
Name                  Last commit message                                              Date
cutlass @ c4f6b8c6bc  FlashAttention-2 release                                         2023-07-17 06:21:34 -07:00
flash_attn            FlashAttention-2 release                                         2023-07-17 06:21:34 -07:00
ft_attention          [FT] rotary_cos/sin should have batch_size dimension             2023-07-06 15:33:33 -07:00
fused_dense_lib       [FusedDense] Allocate lt_workspace on input device               2023-05-30 14:17:26 -07:00
fused_softmax         Add Megatron attention implementation for benchmarking           2022-10-23 23:04:16 -07:00
layer_norm            [LayerNorm] Implement LN with parallel residual, support dim 8k  2023-03-31 14:23:45 -07:00
rotary                Support H100 for other CUDA extensions                           2023-03-15 16:59:27 -07:00
xentropy              Support H100 for other CUDA extensions                           2023-03-15 16:59:27 -07:00