squall / flash-attention
csrc — directory listing at commit cd0c169eee
Latest commit: 27f8f890df by Tri Dao — [FusedDense] Allocate lt_workspace on input device (2023-05-30 14:17:26 -07:00)
Directory        Latest commit                                                     Date
flash_attn       [Docs] Clearer error message for bwd d > 64, bump to v1.0.4      2023-04-26 09:19:48 -07:00
ft_attention     [Gen] Add rotary base as an argument to FT attention kernel      2023-05-30 13:38:34 -07:00
fused_dense_lib  [FusedDense] Allocate lt_workspace on input device               2023-05-30 14:17:26 -07:00
fused_softmax    Add Megatron attention implementation for benchmarking           2022-10-23 23:04:16 -07:00
layer_norm       [LayerNorm] Implement LN with parallel residual, support dim 8k  2023-03-31 14:23:45 -07:00
rotary           Support H100 for other CUDA extensions                           2023-03-15 16:59:27 -07:00
xentropy         Support H100 for other CUDA extensions                           2023-03-15 16:59:27 -07:00