squall/flash-attention: flash_attn/ops at 75e334d407

Latest commit d2f4324f4c by Tri Dao, 2023-07-04 14:53:12 -07:00:
[LayerNorm] Make sure memory addresses are aligned to 16 bytes
triton/         Implement LLaMa                                                  2023-04-18 21:51:35 -07:00
__init__.py     Add __init__.py files to subdirectories for installation         2022-11-17 16:55:44 -08:00
activations.py  [FusedDense] Enable sqrelu activation in FusedMLP                2023-04-13 15:29:32 -07:00
fused_dense.py  Implement GatedMlp                                               2023-04-18 03:37:14 -07:00
layer_norm.py   [LayerNorm] Make sure memory addresses are aligned to 16 bytes   2023-07-04 14:53:12 -07:00
rms_norm.py     Implement LLaMa                                                  2023-04-18 21:51:35 -07:00
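For orientation on what rms_norm.py provides (it was added as part of the LLaMa support noted above): RMSNorm scales each vector by the reciprocal root-mean-square of its features, with no mean subtraction, unlike LayerNorm. The sketch below is a minimal unfused reference in plain PyTorch, not the repo's fused kernel; the function name rms_norm_ref and the eps default are illustrative assumptions, not this library's API.

```python
import torch


def rms_norm_ref(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Root-mean-square over the last (feature) dimension; eps guards
    # against division by zero. No mean subtraction and no bias term,
    # which is what distinguishes RMSNorm from LayerNorm.
    inv_rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
    # Normalize, then apply the learned per-feature scale.
    return x * inv_rms * weight


if __name__ == "__main__":
    x = torch.randn(2, 8, 16)
    w = torch.ones(16)
    print(rms_norm_ref(x, w).shape)  # torch.Size([2, 8, 16])
```

The fused implementation in this directory exists to avoid the extra memory round-trips such an unfused version incurs; the math above is the same either way.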