flash-attention / csrc

Latest commit e45a46a5b7 by Tri Dao (2023-03-14 14:35:53 -07:00): [Rotary] Implement GPT-J style (interleaved) rotary
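The latest commit refers to the GPT-J rotary variant: each adjacent feature pair (x_{2i}, x_{2i+1}) is rotated together, whereas the GPT-NeoX variant instead pairs x_i with x_{i + d/2}. Below is a minimal PyTorch sketch of the interleaved rotation; the function name and shapes are illustrative only, not the signature of the CUDA kernel in csrc/rotary.

```python
import torch

def apply_rotary_interleaved(x, cos, sin):
    """GPT-J style (interleaved) rotary embedding (illustrative sketch).

    x:   (..., seqlen, dim)   activations to rotate
    cos: (seqlen, dim // 2)   cos(position * theta_i)
    sin: (seqlen, dim // 2)   sin(position * theta_i)
    """
    x1, x2 = x[..., 0::2], x[..., 1::2]  # even / odd features form the pairs
    out1 = x1 * cos - x2 * sin           # 2-D rotation applied to each pair
    out2 = x1 * sin + x2 * cos
    # Re-interleave: out[..., 2i] = out1[..., i], out[..., 2i+1] = out2[..., i].
    return torch.stack((out1, out2), dim=-1).flatten(-2)
```

The cos/sin tables are built the usual way, from inverse frequencies theta_i = base^(-2i/dim) evaluated at each position; only the pairing of features differs between the two styles.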
| Directory | Last commit | Date |
|---|---|---|
| flash_attn | [FA] Remove unused variable rng_engine_inputs | 2023-01-25 |
| ft_attention | [Gen] Pass qkv_stride to ft_attention kernel for batched generation | 2023-01-15 |
| fused_dense_lib | [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP | 2023-01-17 |
| fused_softmax | Add Megatron attention implementation for benchmarking | 2022-10-23 |
| layer_norm | [LayerNorm] Rename x1 -> residual | 2023-01-19 |
| rotary | [Rotary] Implement GPT-J style (interleaved) rotary | 2023-03-14 |
| xentropy | Add smoothing for CrossEntropyParallel, rename to CrossEntropyLoss (see the sketch after this table) | 2022-12-23 |