flash-attention / csrc

Latest commit: 6f706eff96 ("Make Softmax an object") by Tri Dao, 2024-01-19 16:09:31 -08:00
Directory                   Last updated                  Latest commit
cutlass @ a75b4ac483        2023-12-21 23:25:50 -08:00    Update cutlass to v3.3.0
flash_attn                  2024-01-19 16:09:31 -08:00    Make Softmax an object
ft_attention                2023-09-18 15:29:06 -07:00    [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead (usage sketch below)
fused_dense_lib             2023-05-30 14:17:26 -07:00    [FusedDense] Allocate lt_workspace on input device
fused_softmax               2022-10-23 23:04:16 -07:00    Add Megatron attention implementation for benchmarking
layer_norm                  2024-01-05 00:31:17 -08:00    [LayerNorm] Switch from CUDA to Triton implementation
rotary                      2023-03-15 16:59:27 -07:00    Support H100 for other CUDA extensions
xentropy                    2023-09-15 20:05:28 -07:00    [CE] Implement CrossEntropyLoss in Triton (usage sketch below)