squall / flash-attention
flash-attention / tests / models

Latest commit 63670fd84a by Tri Dao, 2022-12-27 21:01:50 -08:00: Implement generation for GPT
test_bert.py            Tweak CrossEntropyLoss to take process_group in init   2022-12-27 10:47:43 -08:00
test_gpt_generation.py  Implement generation for GPT                           2022-12-27 21:01:50 -08:00
test_gpt_parallel.py    Implement Tensor Parallel for GPT model                2022-12-26 16:22:43 -08:00
test_gpt.py             Implement generation for GPT                           2022-12-27 21:01:50 -08:00
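
The latest commit, "Implement generation for GPT", suggests test_gpt_generation.py exercises autoregressive decoding on the flash_attn GPT model. Below is a minimal sketch of what such a test might call; the GPTLMHeadModel class path, its acceptance of a GPT2Config, and the generate() method with a max_length argument are assumptions inferred from the commit message, not verified against revision 63670fd84a.

    # Rough sketch, not verified against this revision: GPTLMHeadModel,
    # its config argument, and generate()'s signature are assumptions.
    import torch
    from transformers import GPT2Config
    from flash_attn.models.gpt import GPTLMHeadModel  # assumed module path

    # Tiny config so the smoke test runs quickly on a single GPU.
    config = GPT2Config(n_layer=2, n_head=4, n_embd=128)
    model = GPTLMHeadModel(config).to(device="cuda", dtype=torch.float16)
    model.eval()

    input_ids = torch.randint(0, config.vocab_size, (1, 8), device="cuda")
    with torch.no_grad():
        # Decode up to 16 tokens total from the 8-token prompt
        # (greedy decoding assumed as the default strategy).
        out = model.generate(input_ids=input_ids, max_length=16)

A test along these lines would typically compare the generated token ids against a reference implementation (e.g. the Hugging Face GPT-2 model) on the same prompt.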