From b784ed73cf89cc1c820d18a4bf6f232f91004b47 Mon Sep 17 00:00:00 2001
From: Tri Dao
Date: Fri, 25 Nov 2022 10:49:17 -0800
Subject: [PATCH] [Docs] Clarify OpenFold speedup

---
 usage.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/usage.md b/usage.md
index f2688cc..b34e34c 100644
--- a/usage.md
+++ b/usage.md
@@ -89,7 +89,8 @@ yields the fastest BERT training on cloud instances in MLPerf training 2.0 (June
   memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2. With
   FlashAttention as one of its
   [components](https://twitter.com/gahdritz/status/1595420944880779266), it is
-  up to 3x faster than AlphaFold2, and can predict 2x longer structures.
+  up to 3x faster than AlphaFold2 to run inference on short sequences, and can
+  predict 2x longer structures.
 
 ## Different implementations
 