squall/flash-attention
flash_attn/ops (tree at commit 5d079fdd7a)
Latest commit: dc08ea1c33 "Support H100 for other CUDA extensions" (Tri Dao, 2023-03-15 16:59:27 -07:00)
Name                 Last commit message                                          Last commit date
triton/              Add GPT and ViT models                                       2022-11-13 22:30:23 -08:00
__init__.py          Add __init__.py files to subdirectories for installation     2022-11-17 16:55:44 -08:00
fused_dense.py       Support H100 for other CUDA extensions                       2023-03-15 16:59:27 -07:00
gelu_activation.py   Add fused_dense and dropout_add_layernorm CUDA extensions    2022-11-13 21:59:20 -08:00
layer_norm.py        [LayerNorm] Rename x1 -> residual                            2023-01-19 13:07:27 -08:00
rms_norm.py          [LayerNorm] Rename x1 -> residual                            2023-01-19 13:07:27 -08:00
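
For orientation, below is a minimal usage sketch for the fused ops in this directory. The names used (FusedDense, dropout_add_layer_norm, RMSNorm) are assumptions about what these modules export at this revision, not confirmed by the listing itself, and all of them require the repository's optional CUDA extensions (built from csrc/) to be compiled.

```python
# A minimal sketch, not a confirmed API: the class/function names below
# (FusedDense, dropout_add_layer_norm, RMSNorm) are assumptions about this
# revision of flash-attention, and each import needs the matching CUDA
# extension from csrc/ to be built.
import torch

from flash_attn.ops.fused_dense import FusedDense              # fused_dense.py
from flash_attn.ops.layer_norm import dropout_add_layer_norm   # layer_norm.py
from flash_attn.ops.rms_norm import RMSNorm                    # rms_norm.py

device, dtype = "cuda", torch.float16  # the fused kernels target fp16/bf16 on GPU
hidden = 1024

x = torch.randn(8, 512, hidden, device=device, dtype=dtype)
residual = torch.randn_like(x)

# Fused GEMM-based linear layer.
linear = FusedDense(hidden, hidden).to(device=device, dtype=dtype)
y = linear(x)

# Dropout + residual add + LayerNorm fused into a single kernel launch.
weight = torch.ones(hidden, device=device, dtype=dtype)
bias = torch.zeros(hidden, device=device, dtype=dtype)
out = dropout_add_layer_norm(x, residual, weight, bias,
                             dropout_p=0.1, epsilon=1e-5)

# RMSNorm module backed by the same fused normalization kernels.
norm = RMSNorm(hidden).to(device=device, dtype=dtype)
z = norm(out)
```

The point of these wrappers, per the commit messages above ("Add fused_dense and dropout_add_layernorm CUDA extensions"), is that one call replaces several separate PyTorch ops (e.g. dropout, residual add, and normalization), reducing kernel launches and memory traffic.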