squall/flash-attention
tests/ at commit f1e01c27ba

Latest commit: f1e01c27ba, Tri Dao, "[Gen] Pass qkv_stride to ft_attention kernel for batched generation", 2023-01-15 15:20:01 -08:00

Name                 Last commit message                                                   Date
losses               Tweak CrossEntropyLoss to take process_group in init                 2022-12-27 10:47:43 -08:00
models               [Gen] Pass qkv_stride to ft_attention kernel for batched generation  2023-01-15 15:20:01 -08:00
modules              [TP] Implement TensorParallel without sequence parallel              2023-01-07 13:45:22 -08:00
ops                  [TP] Implement TensorParallel without sequence parallel              2023-01-07 13:45:22 -08:00
test_flash_attn.py   Skip flash_attn_split test                                           2022-11-13 12:27:48 -08:00
test_rotary.py       Add MLP, MHA, Block, Embedding modules                               2022-11-13 22:06:44 -08:00