flash-attention/flash_attn/ops
Name                 Last commit message                                               Last commit date
triton/              Add GPT and ViT models                                            2022-11-13 22:30:23 -08:00
__init__.py          Add __init__.py files to subdirectories for installation         2022-11-17 16:55:44 -08:00
fused_dense.py       Support H100 for other CUDA extensions                            2023-03-15 16:59:27 -07:00
gelu_activation.py   Add fused_dense and dropout_add_layernorm CUDA extensions         2022-11-13 21:59:20 -08:00
layer_norm.py        [LayerNorm] Implement LN with parallel residual, support dim 8k   2023-03-31 14:23:45 -07:00
rms_norm.py          [LayerNorm] Implement LN with parallel residual, support dim 8k   2023-03-31 14:23:45 -07:00
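These modules wrap the repo's fused CUDA/Triton kernels behind Python entry points. As a rough illustration, the sketch below shows how the fused dropout + residual-add + LayerNorm op in layer_norm.py is typically invoked; the dropout_add_layer_norm name and signature match this version of the library, but exact argument names, supported dtypes, and defaults may differ across releases, and a CUDA build of the extensions is assumed.

# Minimal sketch (assumed API): fused dropout(x0) + residual followed by
# LayerNorm, computed in a single kernel instead of three separate ops.
import torch
from flash_attn.ops.layer_norm import dropout_add_layer_norm

hidden = 1024  # per the commit message, hidden dims up to 8k are supported
x0 = torch.randn(8, 512, hidden, device="cuda", dtype=torch.float16)
residual = torch.randn_like(x0)
weight = torch.ones(hidden, device="cuda")   # LayerNorm scale (gamma)
bias = torch.zeros(hidden, device="cuda")    # LayerNorm shift (beta)

# Computes LayerNorm(dropout(x0) + residual) with one fused kernel launch.
out = dropout_add_layer_norm(x0, residual, weight, bias,
                             dropout_p=0.1, epsilon=1e-5)

rms_norm.py exposes the analogous RMSNorm variant, and fused_dense.py provides fused linear/MLP layers (extended to H100 in the commit above); they follow the same pattern of a thin Python wrapper over a prebuilt extension kernel.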