squall / flash-attention
csrc/
Latest commit 2e33fc8e36: "Add GPT and ViT models" (Tri Dao, 2022-11-13 22:30:23 -08:00)
Directory          Last commit                                                   Last updated
flash_attn         Fix out-of-bound memory read                                  2022-11-09 09:34:14 -08:00
fused_dense_lib    Add GPT and ViT models                                        2022-11-13 22:30:23 -08:00
fused_softmax      Add Megatron attention implementation for benchmarking       2022-10-23 23:04:16 -07:00
layer_norm         Add GPT and ViT models                                        2022-11-13 22:30:23 -08:00
rotary             Implement rotary embedding in CUDA                            2022-11-04 22:42:01 -07:00
xentropy           Add fused_dense and dropout_add_layernorm CUDA extensions     2022-11-13 21:59:20 -08:00