squall/flash-attention: flash_attn/models (at commit 4360cfc6a8)
Latest commit: 78b7a1dc18 by Tri Dao, "[OPT] Load fp16 weights on CPU before moving to GPU" (2023-01-22 17:01:32 -08:00)
File          Last commit                                                          Date
__init__.py   Add __init__.py files to subdirectories for installation            2022-11-17 16:55:44 -08:00
bert.py       [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP   2023-01-17 18:12:27 -08:00
gpt.py        [OPT] Load fp16 weights on CPU before moving to GPU                 2023-01-22 17:01:32 -08:00
opt.py        [OPT] Load fp16 weights on CPU before moving to GPU                 2023-01-22 17:01:32 -08:00
vit.py        [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP   2023-01-17 18:12:27 -08:00
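The commit shared by gpt.py and opt.py describes loading fp16 checkpoint weights on CPU before moving them to the GPU. As a generic illustration of that pattern in PyTorch (a minimal sketch, not the repository's actual code; the model class and checkpoint path below are hypothetical placeholders):

```python
import torch

def load_fp16_model(model_cls, checkpoint_path, device="cuda"):
    # Load the checkpoint onto CPU first, so the GPU never holds both
    # the raw state dict and the instantiated model at the same time.
    state_dict = torch.load(checkpoint_path, map_location="cpu")

    model = model_cls()
    model.load_state_dict(state_dict)

    # Convert to half precision and move the materialized model to the GPU.
    return model.half().to(device)
```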