Re: [PATCH v3 3/4] mm: don't expose non-hugetlb page to fast gup prematurely

On 9/27/19 12:31 PM, John Hubbard wrote:
On 9/27/19 5:33 AM, Michal Hocko wrote:
On Thu 26-09-19 20:26:46, John Hubbard wrote:
On 9/26/19 3:20 AM, Kirill A. Shutemov wrote:
BTW, have you looked at other levels of the page table hierarchy? Do we have
the same issue for PMD/PUD/... pages?


Along the lines of "what other memory barriers might be missing for
get_user_pages_fast()?", I'm also concerned that the synchronization between
get_user_pages_fast() and freeing the page tables might be technically broken,
due to missing memory barriers on the get_user_pages_fast() side. Details:

gup_fast() disables interrupts, but I think it also needs some sort of
memory barrier(s), in order to prevent reads of the page tables (gup_pgd_range(),
etc.) from speculatively happening before the interrupts are disabled.

Could you be more specific about the race scenario, please? I thought
that the unmap path would be serialized by the pte lock.


I don't see a pte lock anywhere here.

This case is really pretty far out there, but without run-time memory barriers
I don't think we can completely rule it out:

CPU 0                              CPU 1
--------                           ---------------
                                    get_user_pages_fast()

do_munmap()
  unmap_region()
   free_pgtables()
                                    /*
                                     * speculative reads, re-ordered
                                     * by the CPU at run time, not
                                     * compile time. The function calls
                                     * are not really in this order, but
                                     * the corresponding reads could be.
                                     */
                                    gup_pgd_range()
                                     gup_p4d_range()
                                      gup_pud_range()
                                       gup_pmd_range()
                                        pmd = READ_ONCE(*pmdp)
                                         /* This is stale */

   tlb_finish_mmu()
     tlb_flush_mmu()
      tlb_flush_mmu_tlbonly()
        tlb_flush()
         flush_tlb_mm()
          flush_tlb_mm_range()
           flush_tlb_others()
            native_flush_tlb_others()
             smp_call_function_many(): IPIs
              ...blocks until CPU 1 re-enables
                 interrupts

                                         local_irq_disable()
                                          ...continue the page walk based
                                             on stale/freed data...



...and note that I just asked Leonardo Bras (+CC) to include "the" fix [1] for that (I
proposed a snippet that attempts a fix, anyway) in his upcoming feature that speeds up
PPC THP operations.
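
For reference, the general shape of that kind of fix looks something like the
sketch below. To be clear, this is illustrative only, not the snippet actually
posted in [1]: the wrapper function name here is made up, and the exact barrier
choice and placement are exactly what's under discussion. It assumes the
current lockless gup internals (gup_pgd_range() and friends):

static void gup_fast_walk_sketch(unsigned long start, unsigned long end,
				 unsigned int gup_flags,
				 struct page **pages, int *nr)
{
	unsigned long flags;

	local_irq_save(flags);
	/*
	 * Illustrative barrier: keep the page table reads done inside
	 * gup_pgd_range() from being satisfied before interrupts are
	 * disabled. "Interrupts disabled" is the only thing the
	 * IPI-based TLB flush on the unmap side synchronizes against,
	 * so the reads must not be allowed to float above it.
	 */
	smp_mb();
	gup_pgd_range(start, end, gup_flags, pages, nr);
	local_irq_restore(flags);
}

The only point of the sketch is the ordering: the irq-disable has to take
effect before any of the page table reads, because that's what closes the
window shown in the diagram above.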

So I hope that doesn't muddy this thread too much, but the other thread is awfully quiet
and I was worried that the overlapping changes might go unnoticed.

[1] https://lore.kernel.org/r/4ff1e8e8-929b-9cfc-9bf8-ee88e34de888@xxxxxxxxxx

thanks,
--
John Hubbard
NVIDIA




