Re: [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions

On 26.04.24 18:12, Peter Xu wrote:
> On Fri, Apr 26, 2024 at 09:44:58AM -0400, Peter Xu wrote:
>> On Fri, Apr 26, 2024 at 09:17:47AM +0200, David Hildenbrand wrote:
>>> On 02.04.24 14:55, David Hildenbrand wrote:
>>>> Let's consistently call the "fast-only" part of GUP "GUP-fast" and rename
>>>> all relevant internal functions to start with "gup_fast", to make it
>>>> clearer that this is not ordinary GUP. The current mixture of
>>>> "lockless", "gup" and "gup_fast" is confusing.
>>>>
>>>> Further, avoid the term "huge" when talking about a "leaf" -- for
>>>> example, we nowadays check pmd_leaf() because pmd_huge() is gone. For the
>>>> "hugepd"/"hugepte" stuff, it's part of the name ("is_hugepd"), so that
>>>> stays.
>>>>
>>>> What remains is the "external" interface:
>>>> * get_user_pages_fast_only()
>>>> * get_user_pages_fast()
>>>> * pin_user_pages_fast()
>>>>
>>>> The high-level internal functions for GUP-fast (+slow fallback) are now:
>>>> * internal_get_user_pages_fast() -> gup_fast_fallback()
>>>> * lockless_pages_from_mm() -> gup_fast()
>>>>
>>>> The basic GUP-fast walker functions:
>>>> * gup_pgd_range() -> gup_fast_pgd_range()
>>>> * gup_p4d_range() -> gup_fast_p4d_range()
>>>> * gup_pud_range() -> gup_fast_pud_range()
>>>> * gup_pmd_range() -> gup_fast_pmd_range()
>>>> * gup_pte_range() -> gup_fast_pte_range()
>>>> * gup_huge_pgd()  -> gup_fast_pgd_leaf()
>>>> * gup_huge_pud()  -> gup_fast_pud_leaf()
>>>> * gup_huge_pmd()  -> gup_fast_pmd_leaf()
>>>>
>>>> The weird hugepd stuff:
>>>> * gup_huge_pd() -> gup_fast_hugepd()
>>>> * gup_hugepte() -> gup_fast_hugepte()
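
For orientation, the renamed walkers nest roughly as follows -- a schematic
call graph derived from the lists above, not verbatim mm/gup.c code (the
hugepd hook is reached from whichever levels is_hugepd() matches):

    gup_fast_fallback()                   /* GUP-fast with slow-GUP fallback */
      gup_fast()                          /* lockless walk, IRQs disabled */
        gup_fast_pgd_range()
          gup_fast_pgd_leaf()             /* leaf entry at the PGD level */
          gup_fast_p4d_range()
            gup_fast_pud_range()
              gup_fast_pud_leaf()         /* leaf entry at the PUD level */
              gup_fast_pmd_range()
                gup_fast_pmd_leaf()       /* leaf entry at the PMD level */
                gup_fast_pte_range()      /* ordinary PTEs */
        gup_fast_hugepd()                 /* hugepd entries */
          gup_fast_hugepte()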

>>> I just realized that we end up calling these from follow_hugepd() as well.
>>> And something seems to be off, because gup_fast_hugepd() won't have the VMA
>>> even in the slow-GUP case to pass it to gup_must_unshare().
>>>
>>> So these are GUP-fast functions and the terminology seems correct. But the
>>> usage from follow_hugepd() is questionable,
>>>
>>> commit a12083d721d703f985f4403d6b333cc449f838f6
>>> Author: Peter Xu <peterx@xxxxxxxxxx>
>>> Date:   Wed Mar 27 11:23:31 2024 -0400
>>>
>>>      mm/gup: handle hugepd for follow_page()
>>>
>>> states "With previous refactors on fast-gup gup_huge_pd(), most of the code
>>> can be leveraged", which doesn't look quite true just staring at the
>>> gup_must_unshare() call where we don't pass the VMA. Also,
>>> "unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))" doesn't make any sense
>>> for slow GUP ...

>> Yes, it's not needed; it just didn't look worthwhile to put another helper
>> on top just for this.  I mentioned this in the commit message here:
>>
>>   There's something not needed for follow page, for example, gup_hugepte()
>>   tries to detect pgtable entry change which will never happen with slow
>>   gup (which has the pgtable lock held), but that's not a problem to check.
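
For context, the entry-change detection mentioned here is the classic GUP-fast
pattern; roughly sketched below (simplified, not the verbatim gup_hugepte()
code):

    pte_t pte = ptep_get_lockless(ptep);    /* snapshot the entry */

    /* ... grab a reference on the folio, without holding any locks ... */

    /*
     * The entry may have changed (e.g., zapped and remapped) while we were
     * taking the reference; if so, back off. With slow GUP the pgtable lock
     * is held, so this can never trigger there.
     */
    if (unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
            gup_put_folio(folio, refs, flags);
            return 0;
    }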


>>> @Peter, any insights?

>> However, I think we should pass the vma in for sure; I guess I overlooked
>> that, and it didn't show up in my tests either, as I probably missed ./cow.
>>
>> I'll prepare a separate patch on top of this series and the gup-fast rename
>> patches (I saw this one just reached mm-stable), and I'll see whether I can
>> test it too if I can find a Power system fast enough.  I'll probably drop
>> the "fast" in the hugepd function names too.


For the missing VMA parameter, the cow.c test might not trigger it. We never
need the VMA to make a pinning decision for anonymous memory: we'll trigger
an unsharing fault, get an exclusive anonymous page, and can continue.

We need the VMA in gup_must_unshare() when long-term pinning a file hugetlb
page. I *think* the gup_longterm.c selftest should trigger that, especially:

# [RUN] R/O longterm GUP-fast pin in MAP_SHARED file mapping ... with memfd hugetlb (2048 kB)
...
# [RUN] R/O longterm GUP-fast pin in MAP_SHARED file mapping ... with memfd hugetlb (1048576 kB)


We need a MAP_SHARED page where the PTE is R/O that we want to long-term pin
R/O. I don't remember off the top of my head whether the test here might have
a R/W-mapped folio. If so, we could extend it to cover that.
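
The relevant decision logic, roughly sketched (simplified from memory, not
the verbatim mm/internal.h code):

    static inline bool gup_must_unshare(struct vm_area_struct *vma,
                                        unsigned int flags, struct page *page)
    {
            /* Only R/O pins are affected; writable pins already broke COW. */
            if ((flags & (FOLL_WRITE | FOLL_PIN)) != FOLL_PIN)
                    return false;

            if (!PageAnon(page)) {
                    /* Only R/O long-term pins of file pages are a concern. */
                    if (!(flags & FOLL_LONGTERM))
                            return false;
                    /* GUP-fast has no VMA and must be conservative ... */
                    if (!vma)
                            return true;
                    /* ... with a VMA, only private COW mappings matter. */
                    return is_cow_mapping(vma->vm_flags);
            }

            /* Anonymous memory: decidable without the VMA. */
            return !PageAnonExclusive(page);
    }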

> Hmm, so when I enabled 2M hugetlb, I found that ./cow is failing even on x86.
>
>    # ./cow  | grep -B1 "not ok"
>    # [RUN] vmsplice() + unmap in child ... with hugetlb (2048 kB)
>    not ok 161 No leak from parent into child
>    --
>    # [RUN] vmsplice() + unmap in child with mprotect() optimization ... with hugetlb (2048 kB)
>    not ok 215 No leak from parent into child
>    --
>    # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (2048 kB)
>    not ok 269 No leak from child into parent
>    --
>    # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (2048 kB)
>    not ok 323 No leak from child into parent
>
> And it looks like it was always failing... perhaps since the start?  We

Yes!

commit 7dad331be7816103eba8c12caeb88fbd3599c0b9
Author: David Hildenbrand <david@xxxxxxxxxx>
Date:   Tue Sep 27 13:01:17 2022 +0200

    selftests/vm: anon_cow: hugetlb tests

    Let's run all existing test cases with all hugetlb sizes we're able to
    detect.

    Note that some test cases still fail. This will, for example, be fixed
    once vmsplice properly uses FOLL_PIN instead of FOLL_GET for pinning.
    With 2 MiB and 1 GiB hugetlb on x86_64, the expected failures are:

      # [RUN] vmsplice() + unmap in child ... with hugetlb (2048 kB)
      not ok 23 No leak from parent into child
      # [RUN] vmsplice() + unmap in child ... with hugetlb (1048576 kB)
      not ok 24 No leak from parent into child
      # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (2048 kB)
      not ok 35 No leak from child into parent
      # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (1048576 kB)
      not ok 36 No leak from child into parent
      # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (2048 kB)
      not ok 47 No leak from child into parent
      # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (1048576 kB)
      not ok 48 No leak from child into parent

As it keeps confusing people (until somebody cares enough to fix vmsplice), I
already thought about just disabling the test and adding a comment explaining
why it happens and why nobody cares.

> didn't do the same for hugetlb vs. normal anon in that regard in the
> vmsplice() fix.
>
> I drafted a patch to allow the same refcount > 1 detection, and then all
> tests pass for me, as below.
>
> David, I'd like to double check with you before I post anything: was it your
> intention to do the same when working on the R/O pinning, or not?

Here, certainly, the "if it were easy it would already have been done"
principle applies. :)

The issue is the following: hugetlb pages are scarce resources that cannot usually
be overcommitted. For ordinary memory, we don't care if we COW in some corner case
because there is an unexpected reference. You temporarily consume an additional page
that gets freed as soon as the unexpected reference is dropped.

For hugetlb, it is problematic. Assume you have reserved a single 1 GiB hugetlb page
and your process uses that in a MAP_PRIVATE mapping. Then it calls fork() and the
child quits immediately.

If you decide to COW, you would need a second hugetlb page, which we don't have, so
you have to crash the program.

And in hugetlb it's extremely easy to not get folio_ref_count() == 1:

hugetlb_fault() will do a folio_get(folio) before calling hugetlb_wp()!

... so you essentially always copy.
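
Schematically (a sketch, not the verbatim mm/hugetlb.c code): the fault path
itself holds an extra folio reference across the write-protect handling, so
an anon-style "folio_ref_count() == 1" reuse check inside hugetlb_wp() can
never succeed:

    static vm_fault_t hugetlb_fault(...)
    {
            struct folio *folio = page_folio(pte_page(entry));

            folio_get(folio);          /* +1 taken by the fault path itself */
            ret = hugetlb_wp(...);     /* sees folio_ref_count(folio) >= 2 */
            folio_put(folio);
            ...
    }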


At that point I walked away from it, leaving vmsplice() to be fixed at some
point. Dave Howells was close IIRC ...

I had some ideas about retrying until the other reference is gone (it cannot
be a longterm GUP pin), but as vmsplice essentially takes such references
without FOLL_PIN|FOLL_LONGTERM, it's quite hopeless to resolve that as long
as vmsplice holds long-term references the wrong way.

---

One could argue that fork() with hugetlb and MAP_PRIVATE is stupid and
fragile: assume your child MM is torn down deferred and will only unmap the
hugetlb page later. Or assume you access the page concurrently with fork().
You'd have to COW and crash the program. BUT, there is a horribly ugly hack
in the hugetlb COW code where you *steal* the page from the child process and
crash your child. I'm not making that up; it's horrible.

--
Cheers,

David / dhildenb




