Re: [PATCH] mm: gup: fix the fast GUP race against THP collapse

On 9/5/2022 10:40 PM, David Hildenbrand wrote:
On 05.09.22 16:35, Baolin Wang wrote:


On 9/5/2022 7:11 PM, David Hildenbrand wrote:
On 05.09.22 12:24, David Hildenbrand wrote:
On 05.09.22 12:16, Baolin Wang wrote:


On 9/5/2022 3:59 PM, David Hildenbrand wrote:
On 05.09.22 00:29, John Hubbard wrote:
On 9/1/22 15:27, Yang Shi wrote:
Since general RCU GUP fast was introduced in commit 2667f50e8b81 ("mm:
introduce a general RCU get_user_pages_fast()"), a TLB flush is no longer
sufficient to handle concurrent GUP-fast in all cases; it only handles
traditional IPI-based GUP-fast correctly.  On architectures that send an
IPI broadcast on TLB flush, it works as expected.  But on the
architectures that do not use IPI to broadcast TLB flush, it may have the
below race:

                  CPU A                                          CPU B
             THP collapse                                     fast GUP
                                              gup_pmd_range() <-- see valid pmd
                                                  gup_pte_range() <-- work on pte
             pmdp_collapse_flush() <-- clear pmd and flush
             __collapse_huge_page_isolate()
                 check page pinned <-- before GUP bump refcount
                                                  pin the page
                                                  check PTE <-- no change
             __collapse_huge_page_copy()
                 copy data to huge page
                 ptep_clear()
             install huge pmd for the huge page
                                                  return the stale page
             discard the stale page

Hi Yang,

Thanks for taking the trouble to write down these notes. I always
forget which race we are dealing with, and this is a great help. :)

More...


The race could be fixed by checking whether the PMD is changed after
taking the page pin in fast GUP, just like what is done for the PTE.  If
the PMD is changed, it means there may be a parallel THP collapse, so GUP
should back off.

Also update the stale comment about serializing against fast GUP in
khugepaged.

Fixes: 2667f50e8b81 ("mm: introduce a general RCU get_user_pages_fast()")
Signed-off-by: Yang Shi <shy828301@xxxxxxxxx>
---
     mm/gup.c        | 30 ++++++++++++++++++++++++------
     mm/khugepaged.c | 10 ++++++----
     2 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index f3fc1f08d90c..4365b2811269 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2380,8 +2380,9 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
     }
     #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
-static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
-             unsigned int flags, struct page **pages, int *nr)
+static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
+             unsigned long end, unsigned int flags,
+             struct page **pages, int *nr)
     {
         struct dev_pagemap *pgmap = NULL;
         int nr_start = *nr, ret = 0;
@@ -2423,7 +2424,23 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
                 goto pte_unmap;
             }
-        if (unlikely(pte_val(pte) != pte_val(*ptep))) {
+        /*
+         * THP collapse conceptually does:
+         *   1. Clear and flush PMD
+         *   2. Check the base page refcount
+         *   3. Copy data to huge page
+         *   4. Clear PTE
+         *   5. Discard the base page
+         *
+         * So fast GUP may race with THP collapse then pin and
+         * return an old page since TLB flush is no longer sufficient
+         * to serialize against fast GUP.
+         *
+         * Check PMD, if it is changed just back off since it
+         * means there may be parallel THP collapse.
+         */

As I mentioned in the other thread, it would be a nice touch to move
such discussion into the comment header.

+        if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
+            unlikely(pte_val(pte) != pte_val(*ptep))) {


That should be READ_ONCE() for the *pmdp and *ptep reads. Because this
whole lockless house of cards may fall apart if we try reading the
page table values without READ_ONCE().

I came to the conclusion that the implicit memory barrier when grabbing
a reference on the page is sufficient such that we don't need READ_ONCE
here.

IMHO the compiler may optimize the code 'pte_val(*ptep)' to always be
read from a register, and then we can get an old value if another thread
did set_pte().  I am not sure how the implicit memory barrier can prevent
that compiler optimization?  Please correct me if I missed something.

IIUC, a memory barrier always implies a compiler barrier.


To clarify what I mean, Documentation/atomic_t.txt documents

NOTE: when the atomic RmW ops are fully ordered, they should also imply
a compiler barrier.

Right, I agree.  That means the compiler can not reorder the
'pte_val(*ptep)' access; however, what I am still confused about is
whether the compiler can keep the value of *ptep in a register or on the
stack instead of reloading it from memory?

After the memory+compiler barrier, the value has to be reloaded. Documentation/memory-barriers.txt documents under "COMPILER BARRIERS":

After some investigation, I realized you are totally right. The GCC Extended Asm manual [1] also says: "To ensure memory contains correct values, GCC may need to flush specific register values to memory before executing the asm. Further, the compiler does not assume that any values read from memory before an asm remain unchanged after that asm; it reloads them as needed. Using the "memory" clobber effectively forms a read/write memory barrier for the compiler."

So as you said, the value will be reloaded after the memory+compiler barrier. Thanks for your explanation.

[1] https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html


"READ_ONCE() and WRITE_ONCE() can be thought of as weak forms of barrier() that affect only the specific accesses flagged by the READ_ONCE() or WRITE_ONCE()."

Consequently, if there already is a compiler barrier, additional READ_ONCE/WRITE_ONCE isn't required.
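
To make that concrete outside the kernel tree, here is a small
standalone userspace snippet (illustrative only: the names are invented,
and GCC/Clang asm syntax plus C11 <stdatomic.h> are assumed).  Building
it with "gcc -O2 -S" and comparing the generated code for the three
functions shows the effect:

#include <stdatomic.h>

/* External linkage so the compiler cannot constant-fold the loads. */
int shared_val;
atomic_int refcount;

/* Same idea as the kernel's barrier(): no instructions, just a clobber. */
#define compiler_barrier() __asm__ __volatile__("" ::: "memory")

int read_twice_no_barrier(void)
{
        int a = shared_val;
        int b = shared_val;     /* may be folded into the first load */

        return a + b;
}

int read_twice_with_barrier(void)
{
        int a = shared_val;

        compiler_barrier();     /* the "memory" clobber forces a reload */
        int b = shared_val;

        return a + b;
}

int read_twice_with_rmw(void)
{
        int a = shared_val;

        /*
         * A fully ordered RmW (seq_cst here) acts as a memory barrier
         * and, in practice, as a compiler barrier too, which is the
         * atomic_t.txt point quoted above: the second load has to be
         * issued again.
         */
        atomic_fetch_add(&refcount, 1);

        int b = shared_val;

        return a + b;
}

In read_twice_no_barrier() the compiler is free to fold the two loads
into one; after the clobber or the fully ordered RmW it has to read
shared_val again, which is the property the plain *pmdp / *ptep re-reads
rely on here.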


A similar issue was fixed by commit d6c1f098f2a7 ("mm/swap_state: fix a
data race in swapin_nr_pages"):

--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -509,10 +509,11 @@ static unsigned long swapin_nr_pages(unsigned long offset)
                  return 1;

          hits = atomic_xchg(&swapin_readahead_hits, 0);
-       pages = __swapin_nr_pages(prev_offset, offset, hits, max_pages,
+       pages = __swapin_nr_pages(READ_ONCE(prev_offset), offset, hits,
+                                 max_pages,
                                    atomic_read(&last_readahead_pages));
          if (!hits)
-               prev_offset = offset;
+               WRITE_ONCE(prev_offset, offset);
          atomic_set(&last_readahead_pages, pages);

          return pages;


IIUC the difference here is that there is no other implicit memory+compiler barrier in between.

Yes, I see the difference.
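
For anyone following along, here is a minimal userspace approximation of
what the READ_ONCE()/WRITE_ONCE() annotations in that swap_state.c hunk
buy when no such barrier is around (simplified: the real kernel macros
also handle non-scalar sizes and add extra checks, the two helper
functions below are invented for illustration, and GCC/Clang __typeof__
is assumed):

/*
 * A volatile access tells the compiler it may not cache, re-read,
 * tear or fuse this particular load/store.
 */
#define READ_ONCE(x) \
        (*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, val) \
        (*(volatile __typeof__(x) *)&(x) = (val))

/* Shared state with no lock and no nearby barrier, as in swapin_nr_pages(). */
static unsigned long prev_offset;

unsigned long snapshot_prev_offset(void)
{
        /* Exactly one load of prev_offset is emitted for this access. */
        return READ_ONCE(prev_offset);
}

void record_prev_offset(unsigned long offset)
{
        /* One store, not elided or deferred by the compiler. */
        WRITE_ONCE(prev_offset, offset);
}

The annotation constrains the compiler for that one access only, which
is what swapin_nr_pages() needs because, unlike gup_pte_range(), it has
no memory+compiler barrier between the racing reads and writes.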



