Re: [PATCH v2 5/8] mm/gup: Accelerate thp gup even for "pages != NULL"

On 20.06.23 18:23, Peter Xu wrote:
On Tue, Jun 20, 2023 at 05:43:35PM +0200, David Hildenbrand wrote:
On 20.06.23 01:10, Peter Xu wrote:
The acceleration of THP was done with ctx.page_mask; however, it is
ignored if **pages is non-NULL.

The old optimization was introduced in 2013 in 240aadeedc4a ("mm:
accelerate mm_populate() treatment of THP pages").  It didn't explain why
we can't optimize the **pages non-NULL case.  It's possible that at that
time the major goal was mm_populate(), for which this was enough back then.

In the past we had these sub-page refcounts for THP. My best guess (and I
didn't check if that was still the case in 2013) would be that it was
simpler regarding refcount handling to do it one subpage at a time.

But I might just be wrong.


Optimize THP for all cases by properly looping over each subpage, doing
cache flushes, and boosting refcounts / pincounts where needed in one go.

This can be verified using gup_test below:

    # chrt -f 1 ./gup_test -m 512 -t -L -n 1024 -r 10

Before:    13992.50 ( +-8.75%)
After:       378.50 (+-69.62%)

Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
---
   mm/gup.c | 51 ++++++++++++++++++++++++++++++++++++++++++++-------
   1 file changed, 44 insertions(+), 7 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 4a00d609033e..b50272012e49 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1199,16 +1199,53 @@ static long __get_user_pages(struct mm_struct *mm,
   			goto out;
   		}
   next_page:
-		if (pages) {
-			pages[i] = page;
-			flush_anon_page(vma, page, start);
-			flush_dcache_page(page);
-			ctx.page_mask = 0;
-		}
-
   		page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
   		if (page_increm > nr_pages)
   			page_increm = nr_pages;
+
+		if (pages) {
+			struct page *subpage;
+			unsigned int j;
+
+			/*
+			 * This must be a large folio (and doesn't need to
+			 * be the whole folio; it can be part of it), do
+			 * the refcount work for all the subpages too.
+			 *
+			 * NOTE: here the page may not be the head page
+			 * e.g. when start addr is not thp-size aligned.
+			 * try_grab_folio() should have taken care of tail
+			 * pages.
+			 */
+			if (page_increm > 1) {
+				struct folio *folio;
+
+				/*
+				 * Since we already hold refcount on the
+				 * large folio, this should never fail.
+				 */
+				folio = try_grab_folio(page, page_increm - 1,
+						       foll_flags);
+				if (WARN_ON_ONCE(!folio)) {
+					/*
+					 * Release the 1st page ref if the
+					 * folio is problematic, fail hard.
+					 */
+					gup_put_folio(page_folio(page), 1,
+						      foll_flags);
+					ret = -EFAULT;
+					goto out;
+				}
+			}
+
+			for (j = 0; j < page_increm; j++) {
+				subpage = nth_page(page, j);
+				pages[i+j] = subpage;

Does checkpatch like pages[i+j]? I'd have used spaces around the +.

Can do.


+				flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
+				flush_dcache_page(subpage);
+			}
+		}
+
   		i += page_increm;
   		start += page_increm * PAGE_SIZE;
   		nr_pages -= page_increm;
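
Aside, for readers following the arithmetic: page_increm above is the
number of pages from "start" up to the end of the huge page. A minimal
standalone demo, assuming 4K pages and a 2M THP so that ctx.page_mask ==
HPAGE_PMD_NR - 1 == 511; illustration only, not kernel code:

    #include <stdio.h>

    #define PAGE_SHIFT    12
    #define THP_PAGE_MASK 511UL  /* HPAGE_PMD_NR - 1 for a 2M THP */

    /* Pages from 'start' to the end of the THP containing it. */
    static unsigned long page_increm(unsigned long start)
    {
            return 1 + (~(start >> PAGE_SHIFT) & THP_PAGE_MASK);
    }

    int main(void)
    {
            unsigned long base = 0x7f0000000000UL; /* assumed THP-aligned */

            printf("%lu\n", page_increm(base));            /* 512: whole THP */
            printf("%lu\n", page_increm(base + 0x1000));   /* 511 */
            printf("%lu\n", page_increm(base + 0x1ff000)); /* 1: last subpage */
            return 0;
    }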


So, we did the first try_grab_folio() while our page was PMD-mapped under
the PT lock and we had sufficient permissions (e.g., mapped writable, no
unsharing required). With FOLL_PIN, we incremented the pincount.


I was wondering if something could have happened since we unlocked the
PT lock and possibly PTE-mapped the THP. ... but as the folio is already
pinned, it cannot get shared during fork() [it will stay exclusive].

So we can just take additional pins on that folio.


LGTM, although I do like the GUP-fast way of recording + ref'ing it at a
central place (see gup_huge_pmd() with record_subpages() and friends),
rather than fixing it up after the fact.
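
For context, the GUP-fast pattern referenced here is roughly the
following (a paraphrase for illustration, not the exact mm/gup.c code):
record all subpages of the span up front, then take all references at
once.

    /* Paraphrase of GUP-fast's record_subpages(); illustration only. */
    static int record_subpages(struct page *page, unsigned long addr,
                               unsigned long end, struct page **pages)
    {
            int nr;

            /* Fill the output array for the whole span first ... */
            for (nr = 0; addr != end; addr += PAGE_SIZE, nr++)
                    pages[nr] = nth_page(page, nr);

            return nr;
    }

The caller (e.g. gup_huge_pmd()) then takes all the refs in one go via
try_grab_folio(page, refs, flags), rather than per subpage.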

My read on this is that follow_page_mask() is also used by follow_page(),
which does not need the pages array.

Right ... maybe one day we can do that "better".


No strong opinion here. Maybe we leave this as a follow-up even if it can
be justified?  This patch is probably still the smallest (and still clean)
change to speed this whole thing up for either THP or hugetlb.

Sure, we can leave that as a follow-up.


Thinking about why we have the flush_anon_page()/flush_dcache_page() stuff here and not in GUP-fast ... I suspect that the GUP-fast archs simply don't need that stuff.

I was wondering if there are some possible races with the flush_anon_page() / flush_dcache_page() on a page that might have been unmapped in the meantime (as we dropped the PT lock ...).

Some flush_dcache_page() implementations do some IMHO confusing page_mapcount() things (like in arch/arc/mm/cache.c). But maybe the unmap code handles that as well ... and most likely these archs don't support THP.

Anyhow, just a note that the flush_anon_page()/flush_dcache_page() handling left me confused.

--
Cheers,

David / dhildenb




