On 9/5/22 00:59, David Hildenbrand wrote:
...
>>> diff --git a/mm/gup.c b/mm/gup.c
>>> index f3fc1f08d90c..4365b2811269 100644
>>> --- a/mm/gup.c
>>> +++ b/mm/gup.c
>>> @@ -2380,8 +2380,9 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
>>>   }
>>>   
>>>   #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
>>> -static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>>> -                        unsigned int flags, struct page **pages, int *nr)
>>> +static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
>>> +                        unsigned long end, unsigned int flags,
>>> +                        struct page **pages, int *nr)
>>>   {
>>>           struct dev_pagemap *pgmap = NULL;
>>>           int nr_start = *nr, ret = 0;
>>> @@ -2423,7 +2424,23 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>>>                           goto pte_unmap;
>>>                   }
>>>   
>>> -                if (unlikely(pte_val(pte) != pte_val(*ptep))) {
>>> +                /*
>>> +                 * THP collapse conceptually does:
>>> +                 *   1. Clear and flush PMD
>>> +                 *   2. Check the base page refcount
>>> +                 *   3. Copy data to huge page
>>> +                 *   4. Clear PTE
>>> +                 *   5. Discard the base page
>>> +                 *
>>> +                 * So fast GUP may race with THP collapse then pin and
>>> +                 * return an old page since TLB flush is no longer sufficient
>>> +                 * to serialize against fast GUP.
>>> +                 *
>>> +                 * Check PMD, if it is changed just back off since it
>>> +                 * means there may be parallel THP collapse.
>>> +                 */
>>
>> As I mentioned in the other thread, it would be a nice touch to move
>> such discussion into the comment header.
>>
>>> +                if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
>>> +                    unlikely(pte_val(pte) != pte_val(*ptep))) {
>>
>>
>> That should be READ_ONCE() for the *pmdp and *ptep reads. Because this
>> whole lockless house of cards may fall apart if we try reading the
>> page table values without READ_ONCE().
> 
> I came to the conclusion that the implicit memory barrier when grabbing
> a reference on the page is sufficient such that we don't need READ_ONCE
> here.

OK, I believe you're referring to this:

        folio = try_grab_folio(page, 1, flags);

just earlier in gup_pte_range(). Yes, that's true... but it's hidden,
which is unfortunate. Maybe a comment could help.

> 
> If we still intend to change that code, we should fixup all GUP-fast
> functions in a similar way. But again, I don't think we need a change here.
> 

It's really rough, having to play this hide-and-seek game of "who did
the memory barrier".

And I'm tempted to suggest adding READ_ONCE() to any and all reads of
the page table entries, just to help stay out of trouble. It's a visual
reminder that page table reads are always lockless reads, and are
inherently volatile.

Of course, I realize that adding extra READ_ONCE() calls is not a good
thing. It might be a performance hit, although, again, these are
volatile reads by nature, so you probably had a memory barrier in the
vicinity anyway.

And looking in reverse, there are actually a number of places here where
we could probably get away with *removing* READ_ONCE()!

Overall, I would be inclined to load up on READ_ONCE() calls, yes. But I
sort of expect to be overridden on that, due to potential performance
concerns, and that's reasonable.

At a minimum, we should add a few short comments about which memory
barriers are used, and why we don't need READ_ONCE() or something
stronger when reading the pte.
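To make that concrete, here is roughly the sort of comment I have in
mind, attached to the re-check in gup_pte_range(). Completely untested,
the wording is only a first guess at documenting the implicit barrier
you describe, and I've written the body of the "if" from memory rather
than copying it out of the patch:

        /*
         * No READ_ONCE() here: the atomic refcount update in
         * try_grab_folio() above implies a full memory barrier, so the
         * re-reads of *pmdp and *ptep below cannot be reordered before
         * it, and a concurrent THP collapse (which clears and flushes
         * the PMD first) will be noticed.
         */
        if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
            unlikely(pte_val(pte) != pte_val(*ptep))) {
                gup_put_folio(folio, 1, flags);
                goto pte_unmap;
        }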
> 
>>> -                 * After this gup_fast can't run anymore. This also removes
>>> -                 * any huge TLB entry from the CPU so we won't allow
>>> -                 * huge and small TLB entries for the same virtual address
>>> -                 * to avoid the risk of CPU bugs in that area.
>>> +                 * This removes any huge TLB entry from the CPU so we won't allow
>>> +                 * huge and small TLB entries for the same virtual address to
>>> +                 * avoid the risk of CPU bugs in that area.
>>> +                 *
>>> +                 * Parallel fast GUP is fine since fast GUP will back off when
>>> +                 * it detects PMD is changed.
>>>                   */
>>>                  _pmd = pmdp_collapse_flush(vma, address, pmd);
>>
>> To follow up on David Hildenbrand's note about this in the nearby thread...
>> I'm also not sure if pmdp_collapse_flush() implies a memory barrier on
>> all arches. It definitely does do an atomic op with a return value on x86,
>> but that's just one arch.
>>
> 
> I think a ptep/pmdp clear + TLB flush really has to imply a memory
> barrier, otherwise TLB flushing code might easily mess up with
> surrounding code. But we should better double-check.

Let's document the function as such, once it's verified: "This is a
guaranteed memory barrier". (A rough sketch of what I mean is at the end
of this mail.)

> 
> s390x executes an IDTE instruction, which performs serialization (->
> memory barrier). arm64 seems to use DSB instructions to enforce memory
> ordering.
> 

thanks,

-- 
John Hubbard
NVIDIA
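P.S. For pmdp_collapse_flush(), this is roughly the wording I'd propose
once every arch has been audited. Again, just a sketch: the comment text
is my guess, and whether the declaration in include/linux/pgtable.h is
the right home for it would need checking:

        /*
         * pmdp_collapse_flush(): clear the PMD and flush the TLB for the
         * range it mapped, returning the old PMD value.
         *
         * This is a guaranteed memory barrier. khugepaged relies on it to
         * order the PMD clear against its later page refcount checks, and
         * it pairs with the implicit barrier from the page-reference grab
         * in GUP-fast, so that either GUP-fast observes the cleared PMD
         * and backs off, or khugepaged observes the raised refcount and
         * aborts the collapse. (s390 gets the barrier from IDTE, arm64
         * from DSB; the remaining arches still need to be verified.)
         */
        extern pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
                                         unsigned long address, pmd_t *pmdp);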