flash-attention/tests/models
Xuechen Li 0f7853c6a1
enable loading hf llama checkpoints for training (#446)
* prelim.
* add HF conversion fn.
* mlp.
* change name.
* fix bug.
* inverse permute.
* change comment.
* revert style changes.
* fix.
* add doc.
* revert.
* enable safe load.
* fix safe load.
* fix import.
* fix typing-related lints.
* fix ckpt loading logic.
* make single GPU work.
* test with parallel.
* ckpt format.
* enable pretrained state dict.
* remove unused imports.
* remove unused.
* mark idea related.
2023-08-15 08:33:15 -07:00
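Commit 0f7853c6a1 above adds state-dict conversion between Hugging Face LLaMa checkpoints and the flash-attn GPT model so that HF weights can be used for training. Below is a minimal sketch of how such a checkpoint might be loaded; the helper names (llama_config_to_gpt2_config, remap_state_dict_hf_llama in flash_attn.models.llama) and the checkpoint id are assumptions for illustration, not API documented by this listing.

```python
# Sketch only: helper names and the checkpoint id are assumptions, not
# guaranteed by this directory listing.
import torch
from transformers import AutoConfig, LlamaForCausalLM

from flash_attn.models.gpt import GPTLMHeadModel
from flash_attn.models.llama import (
    llama_config_to_gpt2_config,
    remap_state_dict_hf_llama,
)

checkpoint = "meta-llama/Llama-2-7b-hf"  # illustrative HF checkpoint id

# Translate the HF LLaMa config into the GPT-style config used by flash-attn models.
config = llama_config_to_gpt2_config(AutoConfig.from_pretrained(checkpoint))
config.use_flash_attn = True  # optional: run attention with the FlashAttention kernels

# Download the HF weights, then remap parameter names/layouts (rotary projection
# permutation, gated MLP) to match GPTLMHeadModel's state dict.
hf_state_dict = LlamaForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.float16
).state_dict()
state_dict = remap_state_dict_hf_llama(hf_state_dict, config)

model = GPTLMHeadModel(config, dtype=torch.float16)
model.load_state_dict(state_dict)
```

test_llama.py in this directory presumably exercises the same conversion path against the Hugging Face reference implementation.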
File                             Last commit                                                        Date
test_bert.py                     [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP  2023-01-17 18:12:27 -08:00
test_falcon.py                   [GPT] Implement parallel LLaMa                                     2023-07-28 15:52:48 -10:00
test_gpt_generation_cg.py        [Gen] Fix FT kernel smem size, CG when batch size changed          2023-04-20 17:03:13 -07:00
test_gpt_generation_parallel.py  Implement GPT-J                                                    2023-03-22 16:16:58 -07:00
test_gpt_generation.py           [MHA] Implement MQA/GQA                                            2023-07-23 00:06:58 -07:00
test_gpt_neox.py                 Implement LLaMa                                                    2023-04-18 21:51:35 -07:00
test_gpt_parallel.py             [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP  2023-01-17 18:12:27 -08:00
test_gpt.py                      Implement GPT-J                                                    2023-03-22 16:16:58 -07:00
test_gptj.py                     [Rotary] Fix tests when loading state dict with rotary inv_freqs   2023-07-26 07:16:33 -10:00
test_llama.py                    enable loading hf llama checkpoints for training (#446)            2023-08-15 08:33:15 -07:00
test_opt.py                      Implement GPT-J                                                    2023-03-22 16:16:58 -07:00
test_vit.py                      [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP  2023-01-17 18:12:27 -08:00