squall / flash-attention
flash_attn/modules at commit 311d6606bf
Latest commit: 311d6606bf by Tri Dao, "[Gen] Fix FT kernel smem size, CG when batch size changed" (2023-04-20 17:03:13 -07:00)
File          Last commit message                                         Last commit date
__init__.py   Add __init__.py files to subdirectories for installation   2022-11-17 16:55:44 -08:00
block.py      Implement LLaMa                                             2023-04-18 21:51:35 -07:00
embedding.py  Reorder LN in Block, support OPT                            2023-01-15 22:14:31 -08:00
mha.py        [Gen] Fix FT kernel smem size, CG when batch size changed   2023-04-20 17:03:13 -07:00
mlp.py        Implement LLaMa                                             2023-04-18 21:51:35 -07:00
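
These files are the Transformer building blocks of the flash-attention package: mha.py (multi-head attention), mlp.py (the feed-forward layer), block.py (a full Transformer block combining the two), and embedding.py (token/position embeddings). Below is a minimal usage sketch, not a verified recipe for this exact commit: it assumes the MHA constructor in flash_attn.modules.mha accepts embed_dim, num_heads, causal, and use_flash_attn arguments and that its forward pass takes a (batch, seqlen, embed_dim) tensor, which matches the upstream module around this time. The fused FlashAttention path requires a CUDA device and fp16/bf16 tensors.

```python
# Minimal sketch of using the MHA module from flash_attn/modules/mha.py.
# Assumptions (not verified against commit 311d6606bf): the constructor
# takes embed_dim, num_heads, causal, and use_flash_attn, and forward()
# accepts a (batch, seqlen, embed_dim) tensor and returns the same shape.
import torch
from flash_attn.modules.mha import MHA

device = "cuda"  # the fused FlashAttention kernel runs only on CUDA

mha = MHA(
    embed_dim=1024,
    num_heads=16,          # head dim 1024 / 16 = 64, supported by the kernel
    causal=True,           # decoder-style causal masking
    use_flash_attn=True,   # dispatch to the fused FlashAttention path
).to(device=device, dtype=torch.float16)  # kernels expect fp16/bf16

x = torch.randn(2, 512, 1024, device=device, dtype=torch.float16)
out = mha(x)               # output has the same shape as the input
print(out.shape)           # torch.Size([2, 512, 1024])
```

Setting use_flash_attn=False would fall back to a standard PyTorch attention implementation, which is useful for debugging on hardware the fused kernel does not support.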