| Name                                | Last commit message                                                | Last commit date           |
|-------------------------------------|--------------------------------------------------------------------|----------------------------|
| layers                              | [Gen] Use flash_attn_with_kvcache in generation                    | 2023-09-07 08:24:43 -07:00 |
| losses                              | [CE] Implement CrossEntropyLoss in Triton                          | 2023-09-15 20:05:28 -07:00 |
| models                              | Add tests for Pythia, GPT-JT, and RedPajama models                 | 2023-09-13 01:10:39 -07:00 |
| modules                             | [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead  | 2023-09-18 15:29:06 -07:00 |
| ops                                 | [CE] Implement CrossEntropyLoss in Triton                          | 2023-09-15 20:05:28 -07:00 |
| utils                               | [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead  | 2023-09-18 15:29:06 -07:00 |
| __init__.py                         | Don't compile for Pytorch 2.1 on CUDA 12.1 due to nvcc segfaults   | 2023-09-17 22:15:38 -07:00 |
| bert_padding.py                     | add unpad_input_for_concatenated_sequences (#499)                  | 2023-08-29 02:23:56 -07:00 |
| flash_attn_interface.py             | Implement rotary embedding in flash_attn_with_kvcache              | 2023-09-16 01:20:16 -07:00 |
| flash_attn_triton_og.py             | Run isort and black on python files                                | 2023-08-18 14:22:11 -07:00 |
| flash_attn_triton.py                | Run isort and black on python files                                | 2023-08-18 14:22:11 -07:00 |
| flash_blocksparse_attention.py      | Run isort and black on python files                                | 2023-08-18 14:22:11 -07:00 |
| flash_blocksparse_attn_interface.py | Run isort and black on python files                                | 2023-08-18 14:22:11 -07:00 |
| fused_softmax.py                    | Run isort and black on python files                                | 2023-08-18 14:22:11 -07:00 |
| pyproject.toml                      | Move pyproject.toml to flash-attn and tests dir to avoid PEP 517   | 2023-08-25 15:05:28 -07:00 |
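
Several of the most recent commits in this listing converge on `flash_attn_with_kvcache` (generation now uses it instead of `ft_attention`). Below is a minimal single-step decoding sketch, assuming the top-level `flash_attn` import, fp16 CUDA tensors, and illustrative shapes and cache lengths; it is not the library's own generation loop:

```python
import torch
from flash_attn import flash_attn_with_kvcache

batch, nheads, headdim, max_seqlen = 2, 8, 64, 1024

# Pre-allocated KV cache; the kernel writes new keys/values into it in place.
k_cache = torch.zeros(batch, max_seqlen, nheads, headdim,
                      dtype=torch.float16, device="cuda")
v_cache = torch.zeros_like(k_cache)
# Current number of valid cache entries per sequence (illustrative).
cache_seqlens = torch.full((batch,), 128, dtype=torch.int32, device="cuda")

# One new token per sequence: seqlen_q == 1 during generation.
q = torch.randn(batch, 1, nheads, headdim, dtype=torch.float16, device="cuda")
k_new, v_new = torch.randn_like(q), torch.randn_like(q)

# k_new/v_new are appended to the cache at offset cache_seqlens, then
# attention runs over the whole cache, all in one kernel call.
out = flash_attn_with_kvcache(
    q, k_cache, v_cache, k=k_new, v=v_new,
    cache_seqlens=cache_seqlens, causal=True,
)
```

Per the `flash_attn_interface.py` entry above, the same call can also apply rotary embedding to the queries and new keys via its `rotary_cos`/`rotary_sin` arguments, avoiding a separate RoPE pass per decoding step.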
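The `losses` and `ops` entries track the Triton cross-entropy implementation. A drop-in usage sketch, assuming the class mirrors the `torch.nn.CrossEntropyLoss` interface (the `ignore_index` argument here is an assumption):

```python
import torch
from flash_attn.losses.cross_entropy import CrossEntropyLoss

vocab_size = 32000
loss_fn = CrossEntropyLoss(ignore_index=-100)  # assumed to match torch.nn semantics

logits = torch.randn(8, vocab_size, dtype=torch.float16,
                     device="cuda", requires_grad=True)
labels = torch.randint(0, vocab_size, (8,), device="cuda")

loss = loss_fn(logits, labels)  # forward/backward run as Triton kernels
loss.backward()
```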
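`bert_padding.py` holds the pack/unpack helpers used with the varlen kernels; the commit above adds `unpad_input_for_concatenated_sequences` (#499), whose interface is not shown here. A sketch of the established `unpad_input`/`pad_input` pair, assuming the four-tuple return used in this era of the library:

```python
import torch
from flash_attn.bert_padding import unpad_input, pad_input

batch, seqlen, hidden = 4, 16, 128
x = torch.randn(batch, seqlen, hidden, dtype=torch.float16, device="cuda")
# 1 = real token, 0 = padding; per-sequence lengths are illustrative.
lengths = torch.tensor([16, 9, 12, 5], device="cuda")
attention_mask = (torch.arange(seqlen, device="cuda")[None, :]
                  < lengths[:, None]).int()

# Pack padded (batch, seqlen, hidden) into flat (total_tokens, hidden)
# plus the cu_seqlens metadata the varlen kernels consume.
x_unpad, indices, cu_seqlens, max_seqlen = unpad_input(x, attention_mask)
# ... run a varlen kernel on x_unpad with cu_seqlens / max_seqlen ...
x_repad = pad_input(x_unpad, indices, batch, seqlen)  # restore padding
```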