flash-attention/flash_attn
Kirthi Shankar Sivamani a03f6f8e9e
Enable CUDA graphs (#386)
* Add RNG state to kernel launch params
* Save seed and offset for backward
* Single thread writes to global mem
* compute_dq_dk_dv_1colblock gets seed and offset from launch params
* compute_dq_dk_dv_1rowblock gets seed and offset from launch params
* Change forward C++ APIs to save RNG state for backward
* Change backward C++ APIs to set RNG state for bprop launcher
* Bug fixes
* Python-side API changes
* Bug fix; only save seeds instead of full offset
* Account for 3D grid size

---------

Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
2023-07-27 16:11:34 -07:00
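
Taken together, the changes above move the dropout RNG state (Philox seed and offset) out of host-side kernel arguments and into device memory that the forward pass saves for the backward pass, which is what makes the kernels safe to capture and replay in a CUDA graph. Below is a minimal usage sketch of what this enables, not code from the PR itself: it assumes flash-attn 2.x with the public flash_attn_func API, a PyTorch build with torch.cuda.graph, and an Ampere-or-newer GPU; the shapes and variable names are illustrative.

```python
# Sketch: capture a FlashAttention forward + backward with dropout in a CUDA graph.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 16, 64
q, k, v = (
    torch.randn(batch, seqlen, nheads, headdim, device="cuda",
                dtype=torch.float16, requires_grad=True)
    for _ in range(3)
)

# Warm up on a side stream so one-time allocations are not recorded in the graph.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        out = flash_attn_func(q, k, v, dropout_p=0.1, causal=True)
        out.sum().backward()
        q.grad = k.grad = v.grad = None  # leave grads unallocated before capture
torch.cuda.current_stream().wait_stream(s)

# Capture forward + backward. Because the kernels read the Philox seed/offset
# from device memory saved by the forward pass, the dropout pattern stays
# consistent between the captured forward and backward on every replay.
graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph):
    out = flash_attn_func(q, k, v, dropout_p=0.1, causal=True)
    out.sum().backward()

graph.replay()  # re-runs the captured forward and backward work
```

On each replay, fresh inputs would be copied into q, k, and v with copy_() and gradients read back from their .grad fields; torch.cuda.make_graphed_callables wraps the same warmup-then-capture pattern for whole modules.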
layers [MHA] Implement MQA/GQA 2023-07-23 00:06:58 -07:00
losses Tweak CrossEntropyLoss to take process_group in init 2022-12-27 10:47:43 -08:00
models Implement ParallelGatedMlp (#251) 2023-07-26 12:14:15 -07:00
modules [MLP] Edit ParallelGatedMlp 2023-07-26 09:39:37 -10:00
ops [LayerNorm] Make sure memory addresses are aligned to 16 bytes 2023-07-04 14:53:12 -07:00
utils [Gen] Minor tweak to allocate_inference_cache 2023-04-21 11:56:47 -07:00
__init__.py Bump to v2.0.1 2023-07-23 12:33:42 -10:00
bert_padding.py remove numpy dependency 2022-10-06 19:17:15 +02:00
flash_attn_interface.py Enable CUDA graphs (#386) 2023-07-27 16:11:34 -07:00
flash_attn_triton_og.py Implement FlashAttention in Triton 2022-10-30 18:09:11 -07:00
flash_attn_triton.py [Triton] Fix benchmark_causal, mention Triton version 2023-03-22 00:51:16 -07:00
flash_blocksparse_attention.py Rename src -> flash_attn 2022-06-01 18:50:26 -07:00
flash_blocksparse_attn_interface.py Rename src -> flash_attn 2022-06-01 18:50:26 -07:00
fused_softmax.py Add Megatron attention implementation for benchmarking 2022-10-23 23:04:16 -07:00