squall / flash-attention
flash_attn / modules

Latest commit dfe29f5e2b by Tri Dao (2023-09-18 15:29:06 -07:00):
[Gen] Don't use ft_attention, use flash_attn_with_kvcache instead
File           Last commit message                                                   Date
__init__.py    Add __init__.py files to subdirectories for installation             2022-11-17 16:55:44 -08:00
block.py       Run isort and black on python files                                   2023-08-18 14:22:11 -07:00
embedding.py   Run isort and black on python files                                   2023-08-18 14:22:11 -07:00
mha.py         [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead    2023-09-18 15:29:06 -07:00
mlp.py         [MLP] Implement SwiGLU with torch jiterator                           2023-09-04 15:43:53 -07:00
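
The mha.py commit above switches generation from the ft_attention kernel to flash_attn_with_kvcache. Below is a minimal single-decoding-step sketch of that API, assuming flash-attn >= 2.2 on a CUDA GPU with fp16 tensors; the batch size, head counts, and sequence lengths are illustrative, not taken from this repo.

    # Sketch: one decoding step with a pre-allocated KV cache.
    import torch
    from flash_attn import flash_attn_with_kvcache

    batch, nheads, headdim = 2, 8, 64
    max_seqlen = 256

    # Pre-allocated KV cache: (batch, max_seqlen, nheads_k, headdim).
    k_cache = torch.zeros(batch, max_seqlen, nheads, headdim,
                          dtype=torch.float16, device="cuda")
    v_cache = torch.zeros_like(k_cache)

    # Number of valid cached tokens per sequence (int32, one per batch element).
    cache_seqlens = torch.full((batch,), 128, dtype=torch.int32, device="cuda")

    # One new query token per sequence, plus its key/value to append to the cache.
    q = torch.randn(batch, 1, nheads, headdim, dtype=torch.float16, device="cuda")
    k_new = torch.randn_like(q)
    v_new = torch.randn_like(q)

    # Appends k_new/v_new into the cache at position cache_seqlens and attends
    # over the cached prefix plus the new token in one fused kernel.
    out = flash_attn_with_kvcache(
        q, k_cache, v_cache, k=k_new, v=v_new,
        cache_seqlens=cache_seqlens, causal=True,
    )
    print(out.shape)  # (batch, 1, nheads, headdim)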
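
The mlp.py commit implements SwiGLU, fusing the elementwise part with torch's jiterator. The sketch below shows the standard SwiGLU computation in plain, unfused PyTorch for clarity; the module name and the single fc1 projection producing both halves are assumptions for illustration, not this repo's code.

    # Sketch: a SwiGLU MLP block (unfused reference version).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SwiGLUMLP(nn.Module):
        def __init__(self, dim: int, hidden_dim: int):
            super().__init__()
            # One projection producing the gate and up halves together.
            self.fc1 = nn.Linear(dim, 2 * hidden_dim)
            self.fc2 = nn.Linear(hidden_dim, dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            gate, up = self.fc1(x).chunk(2, dim=-1)
            # SwiGLU: silu(gate) * up; the repo's version fuses this
            # elementwise product into one CUDA kernel via jiterator.
            return self.fc2(F.silu(gate) * up)

    x = torch.randn(4, 128)
    print(SwiGLUMLP(128, 512)(x).shape)  # torch.Size([4, 128])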