flash-attention/flash_attn

Latest commit c9d4a816fa by dan_the_3rd: Support LLaMa2 and CodeLLaMa (#491)
Co-authored-by: danthe3rd <danthe3rd>
2023-08-30 10:31:14 -07:00
Name                                  Last commit message                                                   Date
layers                                Run isort and black on python files                                   2023-08-18 14:22:11 -07:00
losses                                Run isort and black on python files                                   2023-08-18 14:22:11 -07:00
models                                Support LLaMa2 and CodeLLaMa (#491)                                   2023-08-30 10:31:14 -07:00
modules                               Support MQA + MP for decoding (#490)                                  2023-08-30 10:29:54 -07:00
ops                                   [bugfix] handle_x not define when using checkpoint_lvl = 2 (#502)     2023-08-29 23:46:10 -07:00
utils                                 [Gen] Minor fix to modify logits for top_p                            2023-08-29 14:29:06 -07:00
__init__.py                           Update Cutlass to v3.2.0                                              2023-08-27 23:47:28 -07:00
bert_padding.py                       add unpad_input_for_concatenated_sequences (#499)                     2023-08-29 02:23:56 -07:00
flash_attn_interface.py               Change causal mask to be aligned to bottom-right instead of top-left  2023-08-24 23:41:07 -07:00
flash_attn_triton_og.py               Run isort and black on python files                                   2023-08-18 14:22:11 -07:00
flash_attn_triton.py                  Run isort and black on python files                                   2023-08-18 14:22:11 -07:00
flash_blocksparse_attention.py        Run isort and black on python files                                   2023-08-18 14:22:11 -07:00
flash_blocksparse_attn_interface.py   Run isort and black on python files                                   2023-08-18 14:22:11 -07:00
fused_softmax.py                      Run isort and black on python files                                   2023-08-18 14:22:11 -07:00
pyproject.toml                        Move pyproject.toml to flash-attn and tests dir to avoid PEP 517      2023-08-25 15:05:28 -07:00
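The flash_attn_interface.py commit above changes the causal mask from top-left to bottom-right alignment, which matters when the query and key sequence lengths differ (e.g. decoding with a KV cache). The following is a minimal pure-Python sketch of that mask semantics, not the library's actual implementation; the `causal_mask` helper and its `align` parameter are hypothetical names for illustration:

```python
# Sketch (not flash-attn's code) of bottom-right vs top-left causal masking.
# With bottom-right alignment, query i may attend to keys j satisfying
# j <= i + seqlen_k - seqlen_q; top-left alignment uses j <= i. The two
# coincide when seqlen_q == seqlen_k.

def causal_mask(seqlen_q, seqlen_k, align="bottom-right"):
    """Return a seqlen_q x seqlen_k boolean matrix; True = key is visible."""
    offset = seqlen_k - seqlen_q if align == "bottom-right" else 0
    return [
        [j <= i + offset for j in range(seqlen_k)]
        for i in range(seqlen_q)
    ]

# Two queries against four cached keys: bottom-right alignment lets the
# final query see all keys, while top-left alignment would cut off the
# two most recent keys.
print(causal_mask(2, 4))
# [[True, True, True, False], [True, True, True, True]]
print(causal_mask(2, 4, align="top-left"))
# [[True, False, False, False], [True, True, False, False]]
```

Bottom-right alignment matches the intuition for incremental decoding: the newest query token sits at the end of the key sequence, so it should see the entire cache.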