[Docs] Mention PubMedGPT

Tri Dao 2022-12-15 19:44:59 -08:00
parent 04c4c6106e
commit ece8f05d09

@@ -45,6 +45,11 @@ yields the fastest BERT training on cloud instances in MLPerf training 2.0 (June
 ## Language model training & inference
+- [PubMedGPT 2.7B](https://crfm.stanford.edu/2022/12/15/pubmedgpt.html), a
+domain-specific LLM for biomedicine, by Stanford CRFM, trained on
+[MosaicML](https://www.mosaicml.com/blog/introducing-pubmed-gpt) Cloud. Just
+using FlashAttention nearly halves the total training time.
 - Meta's
 [AITemplate](https://ai.facebook.com/blog/gpu-inference-engine-nvidia-amd-open-source/)
 uses FlashAttention as part of their approach to speed up Transformer