squall/flash-attention
tests/ at tree 2dc2a19589

Latest commit 78b7a1dc18 by Tri Dao: [OPT] Load fp16 weights on CPU before moving to GPU (2023-01-22 17:01:32 -08:00)
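
The latest commit's subject describes a common OOM-avoidance pattern: materialize the fp16 checkpoint in CPU memory first, and move the model to the GPU only once the weights are in place. A minimal plain-PyTorch sketch of that pattern; the checkpoint path and the stand-in model are hypothetical, not the repo's actual OPT loading code.

    import torch
    from torch import nn

    model = nn.Linear(1024, 1024)  # stand-in; the real code builds an OPT model

    # Load the fp16 checkpoint into CPU memory so the GPU never has to hold
    # the checkpoint tensors and a second model copy at the same time.
    state_dict = torch.load("opt_fp16.pt", map_location="cpu")  # hypothetical path
    model.load_state_dict(state_dict)

    # Move to the GPU only after the weights are in place.
    model = model.half().cuda()
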
losses/              Tweak CrossEntropyLoss to take process_group in init                 2022-12-27 10:47:43 -08:00
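
Per the commit message above, the process group is now passed to CrossEntropyLoss at construction rather than per call. A sketch of what a caller might look like, assuming the import path flash_attn.losses.cross_entropy and a tensor-parallel setting where each rank holds a slice of the vocabulary logits; both of those details are assumptions, not confirmed by this listing.

    import torch
    import torch.distributed as dist
    from flash_attn.losses.cross_entropy import CrossEntropyLoss  # assumed path

    # Run under torchrun so the default process group can initialize.
    dist.init_process_group(backend="nccl")

    # process_group now goes into __init__ (per the commit message).
    loss_fn = CrossEntropyLoss(process_group=dist.group.WORLD)

    vocab_per_rank = 50304 // dist.get_world_size()  # assumed vocab sharding
    logits = torch.randn(8, vocab_per_rank, device="cuda", dtype=torch.float16)
    labels = torch.randint(0, 50304, (8,), device="cuda")
    loss = loss_fn(logits, labels)
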
models/              [OPT] Load fp16 weights on CPU before moving to GPU                  2023-01-22 17:01:32 -08:00
modules/             [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP    2023-01-17 18:12:27 -08:00
ops/                 [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP    2023-01-17 18:12:27 -08:00
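
The commit touching both modules/ and ops/ renames FusedDenseGeluDense to FusedMLP and adds relu as an activation choice, so the module is no longer gelu-only. As an unfused reference for what such a module computes (Linear -> activation -> Linear); the class name and keyword here are chosen for illustration, not taken from the repo's API.

    import torch
    from torch import nn

    class ReferenceMLP(nn.Module):
        """Unfused Linear -> activation -> Linear, the math FusedMLP fuses."""
        def __init__(self, dim, hidden_dim, activation="relu"):
            super().__init__()
            self.fc1 = nn.Linear(dim, hidden_dim)
            self.act = nn.ReLU() if activation == "relu" else nn.GELU()
            self.fc2 = nn.Linear(hidden_dim, dim)

        def forward(self, x):
            return self.fc2(self.act(self.fc1(x)))

    x = torch.randn(2, 16, 256)
    out = ReferenceMLP(256, 1024, activation="relu")(x)
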
test_flash_attn.py   Skip flash_attn_split test                                            2022-11-13 12:27:48 -08:00
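
Tests like test_flash_attn.py typically check the fused kernel against a straightforward attention implementation. A sketch of such a reference, assuming a (batch, seqlen, nheads, headdim) layout and an upcast to fp32 for the comparison; the repo's actual reference function may differ.

    import math
    import torch

    def attention_ref(q, k, v):
        # softmax(Q K^T / sqrt(d)) V in full precision, for comparison
        # against the fused kernel's fp16 output.
        d = q.shape[-1]
        scores = torch.einsum("bthd,bshd->bhts", q.float(), k.float()) / math.sqrt(d)
        probs = torch.softmax(scores, dim=-1)
        return torch.einsum("bhts,bshd->bthd", probs, v.float()).to(q.dtype)

    q = k = v = torch.randn(2, 128, 8, 64, dtype=torch.float16)
    out = attention_ref(q, k, v)
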
test_rotary.py       Add MLP, MHA, Block, Embedding modules                                2022-11-13 22:06:44 -08:00
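
test_rotary.py exercises rotary position embeddings. A plain-PyTorch sketch of the rotation, shown with the half-split pairing convention; whether the tested kernel uses this pairing or an interleaved one is not visible from this listing.

    import torch

    def apply_rotary(x, cos, sin):
        # Rotate pairs (x1, x2) -> (x1 cos - x2 sin, x2 cos + x1 sin),
        # here pairing the two halves of the head dimension.
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat((x1 * cos - x2 * sin, x2 * cos + x1 * sin), dim=-1)

    t, d = 128, 64
    inv_freq = 1.0 / (10000 ** (torch.arange(0, d, 2).float() / d))
    angles = torch.outer(torch.arange(t).float(), inv_freq)   # (t, d/2)
    cos, sin = angles.cos()[:, None, :], angles.sin()[:, None, :]
    x = torch.randn(2, t, 8, d)                                # (b, t, h, d)
    out = apply_rotary(x, cos, sin)
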