squall / flash-attention
csrc (at commit 605655bc66)
Latest commit: dec4f2e910 by Tri Dao, "[FusedDense] Set workspace size to 32M for Hopper and 4M for others" (2023-04-06 23:40:15 -07:00)
Directory          Latest commit message                                                  Date
flash_attn         Support H100                                                           2023-03-15 14:59:02 -07:00
ft_attention       [FT] Fix FT's single query attention for bf16 hdim128 rotary          2023-03-28 21:27:00 -07:00
fused_dense_lib    [FusedDense] Set workspace size to 32M for Hopper and 4M for others   2023-04-06 23:40:15 -07:00
fused_softmax      Add Megatron attention implementation for benchmarking                2022-10-23 23:04:16 -07:00
layer_norm         [LayerNorm] Implement LN with parallel residual, support dim 8k       2023-03-31 14:23:45 -07:00
rotary             Support H100 for other CUDA extensions                                2023-03-15 16:59:27 -07:00
xentropy           Support H100 for other CUDA extensions                                2023-03-15 16:59:27 -07:00