squall / flash-attention
tests/ at commit 226a1b721d
Latest commit: "Implement TensorParallel for FusedDense and FusedDenseGeluDense" (Tri Dao, 2022-12-24 11:48:56 -08:00)
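The latest commit adds tensor parallelism for the fused dense layers. As a rough sketch of the underlying idea only, not this repository's implementation, a column-parallel linear layer shards the weight matrix across ranks and concatenates the per-rank outputs. The snippet below simulates two ranks on one device with plain PyTorch instead of torch.distributed; all names in it are hypothetical.

```python
import torch
import torch.nn.functional as F

# Sketch: column-parallel linear, the building block behind tensor-parallel
# dense layers. Two "ranks" are simulated on one device; a real implementation
# shards across GPUs and gathers results with torch.distributed.
torch.manual_seed(0)
in_features, out_features, world_size = 16, 8, 2

weight = torch.randn(out_features, in_features)  # full (out, in) weight
shards = weight.chunk(world_size, dim=0)         # each rank keeps out/world_size rows

x = torch.randn(4, in_features)                  # the input is replicated on every rank
partial = [F.linear(x, w) for w in shards]       # each rank computes its output slice
y_parallel = torch.cat(partial, dim=-1)          # gather the slices along the feature dim

y_reference = F.linear(x, weight)                # single-device reference
assert torch.allclose(y_parallel, y_reference, atol=1e-6)
```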
Name                  Last commit                                                          Date
losses/               Add smoothing for CrossEntropyParallel, rename to CrossEntropyLoss  2022-12-23 14:51:08 -08:00
models/               Implement last_layer_subset optimization for BERT                    2022-12-19 22:18:46 -08:00
ops/                  Implement TensorParallel for FusedDense and FusedDenseGeluDense      2022-12-24 11:48:56 -08:00
test_flash_attn.py    Skip flash_attn_split test                                           2022-11-13 12:27:48 -08:00
test_rotary.py        Add MLP, MHA, Block, Embedding modules                               2022-11-13 22:06:44 -08:00
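test_flash_attn.py and test_rotary.py exercise the fused CUDA kernels, and tests of this kind typically compare the kernel output against a plain-PyTorch reference. The sketch below is only an illustration of what such references look like, not this repository's test code; both function names are made up.

```python
import math
import torch

# Sketch: pure-PyTorch references of the kind a fused-kernel test would
# compare against. Function names here are hypothetical.

def attention_ref(q, k, v, causal=False):
    """Standard softmax(q k^T / sqrt(d)) v attention. Shapes: (batch, heads, seq, d)."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    if causal:
        seq = q.shape[-2]
        mask = torch.triu(torch.ones(seq, seq, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))
    return scores.softmax(dim=-1) @ v

def apply_rotary_ref(x, base=10000.0):
    """Rotate adjacent channel pairs by position-dependent angles (RoPE). x: (batch, seq, heads, d)."""
    d = x.shape[-1]
    pos = torch.arange(x.shape[1], dtype=x.dtype)
    inv_freq = base ** (-torch.arange(0, d, 2, dtype=x.dtype) / d)
    angles = pos[:, None] * inv_freq[None, :]            # (seq, d/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos[None, :, None, :] - x2 * sin[None, :, None, :]
    out[..., 1::2] = x1 * sin[None, :, None, :] + x2 * cos[None, :, None, :]
    return out

q = k = v = torch.randn(2, 3, 5, 8)
assert attention_ref(q, k, v, causal=True).shape == (2, 3, 5, 8)

x = torch.randn(2, 5, 3, 8)
# Rotations preserve per-position norms, a cheap sanity check.
assert torch.allclose(apply_rotary_ref(x).pow(2).sum(-1), x.pow(2).sum(-1), atol=1e-4)
```

Running the suite itself would be the usual pytest invocation, e.g. pytest tests/test_flash_attn.py, assuming a CUDA build of the package is installed.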