Re: [PATCH 1/4] mm: Trial do_wp_page() simplification

On Tue, Sep 15, 2020 at 04:38:38PM -0300, Jason Gunthorpe wrote:
> On Tue, Sep 15, 2020 at 03:13:46PM -0400, Peter Xu wrote:
> > On Tue, Sep 15, 2020 at 03:29:33PM -0300, Jason Gunthorpe wrote:
> > > On Tue, Sep 15, 2020 at 01:05:53PM -0300, Jason Gunthorpe wrote:
> > > > On Tue, Sep 15, 2020 at 10:50:40AM -0400, Peter Xu wrote:
> > > > > On Mon, Sep 14, 2020 at 08:28:51PM -0300, Jason Gunthorpe wrote:
> > > > > > Yes, this stuff does pin_user_pages_fast() and MADV_DONTFORK
> > > > > > together. It sets FOLL_FORCE and FOLL_WRITE to get an exclusive copy
> > > > > > of the page and MADV_DONTFORK was needed to ensure that a future fork
> > > > > > doesn't establish a COW that would break the DMA by moving the
> > > > > > physical page over to the fork. DMA should stay with the process that
> > > > > > called pin_user_pages_fast(). (Is MADV_DONTFORK still needed after
> > > > > > the recent years of work on GUP/etc? It is a pretty terrible ancient
> > > > > > thing.)
> > > > > 
> > > > > ... Now I'm more confused about what has happened.
> > > > 
> > > > I'm going to try to confirm that the MADV_DONTFORK is actually being
> > > > done by userspace properly, more later.
> > > 
> > > It turns out the test is broken and does not call MADV_DONTFORK when
> > > doing forks - it is an opt-in that the test didn't make.
> > > 
> > > It looks to me like this patch makes it much more likely that the COW
> > > break after page pinning will end up moving the pinned physical page
> > > to the forked child, whereas before that was not very common. Does that
> > > make sense?
> > 
> > My understanding is that the fix should not matter much for the current
> > failing test case, as long as it's with FOLL_FORCE & FOLL_WRITE.  However,
> > what I'm not sure about is what happens if the RDMA/DMA buffers are designed
> > for pure read from userspace.
> 
> No, they are write. Always FOLL_WRITE.
> 
> > E.g. for vfio I'm looking at vaddr_get_pfn(), where I believe such pure read
> > buffers will be a GUP with FOLL_PIN and !FOLL_WRITE, which finally ends up in
> > pin_user_pages_remote().  So what I'm worried about is something like this:
> 
> I think the !(prot & IOMMU_WRITE) case is probably very rare for
> VFIO. I'm also not sure it will work reliably; in RDMA we had this as
> a more common case and found bugs long ago. The COW had to be broken
> for the pin anyhow.

If I'm not mistaken, QEMU/KVM (assuming there's a vIOMMU in the guest) will do
VFIO maps in this read-only way when the IOVA mapped in the guest points to
read-only buffers (say, allocated with PCI_DMA_FROMDEVICE).
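
For reference, a minimal sketch (not taken from QEMU) of how such a read-only
mapping would be requested from userspace: the DMA map is submitted with only
the READ permission flag, which is what should end up as the !(prot &
IOMMU_WRITE) / !FOLL_WRITE pin in the type1 path mentioned above.  Container
and group setup are omitted:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Map @len bytes at userspace address @buf to IOVA @iova so the device
 * can read them but not write them (no VFIO_DMA_MAP_FLAG_WRITE). */
static int vfio_map_readonly(int container_fd, void *buf, size_t len,
			     __u64 iova)
{
	struct vfio_iommu_type1_dma_map map;

	memset(&map, 0, sizeof(map));
	map.argsz = sizeof(map);
	map.flags = VFIO_DMA_MAP_FLAG_READ;	/* device read-only */
	map.vaddr = (__u64)(uintptr_t)buf;
	map.iova  = iova;
	map.size  = len;

	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}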

> 
> >   1. Proc A gets a private anon page X for DMA, mapcount==refcount==1.
> > 
> >   2. Proc A fork()s and gives birth to proc B, page X will now have
> >      mapcount==refcount==2, write-protected.  proc B quits.  Page X goes back
> >      to mapcount==refcount==1 (note! without WRITE bits set in the PTE).
> 
> >   3. pin_user_pages(write=false) for page X.  Since it's with !FORCE & !WRITE,
> >      no COW needed.  Refcount==2 after that.
> > 
> >   4. Pass these pages to the device.  We either set up the IOMMU page table
> >      or just use the PFNs, which is not important imho - the most important
> >      thing is that the device will DMA into page X no matter what.
> > 
> >   5. Some thread of proc A writes to page X, triggering COW since it's
> >      write-protected with mapcount==1 && refcount==2.  The HVA that points to
> >      page X will be changed to point to another page Y after the COW.
> > 
> >   6. Device DMA happens, data resides on X.  Proc A can never get the data,
> >      though, because it's looking at page Y now.
> 
> RDMA doesn't ever use !WRITE
> 
> I'm guessing #5 is the issue, just with a different ordering. If the
> #3 pin_user_pages() precedes the #2 fork, don't we get to the same #5?

Right, but only without MADV_DONTFORK?  Without MADV_DONTFORK, I'd probably
still see that as a userspace bug rather than a kernel one, if userspace
decided to fork() after step #3.
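
The problematic step is really #5: the parent's write to a COW-shared anon page
detaches its virtual address from the physical page the device is using.  That
"X replaced by Y" move can be observed from plain userspace, without any DMA or
pin at all, by keeping a forked child alive (so the page stays shared) and
watching the PFN through /proc/self/pagemap.  A minimal sketch, assuming it is
run as root so pagemap exposes real PFNs, with error handling omitted:

#include <fcntl.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* Physical frame number behind @addr: bits 0-54 of the pagemap entry.
 * Reads back as 0 without CAP_SYS_ADMIN. */
static uint64_t pfn_of(void *addr)
{
	uint64_t entry = 0;
	int fd = open("/proc/self/pagemap", O_RDONLY);
	off_t off = ((uintptr_t)addr / sysconf(_SC_PAGESIZE)) * sizeof(entry);

	pread(fd, &entry, sizeof(entry), off);
	close(fd);
	return entry & ((1ULL << 55) - 1);
}

int main(void)
{
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	pid_t child;

	p[0] = 1;		/* fault in page "X" */
	printf("before fork:  pfn=%#llx\n", (unsigned long long)pfn_of(p));

	child = fork();
	if (child == 0) {	/* child keeps X mapped, now shared and R/O */
		pause();
		_exit(0);
	}

	p[0] = 2;		/* parent write triggers COW: moved to "Y" */
	printf("after write:  pfn=%#llx\n", (unsigned long long)pfn_of(p));

	kill(child, SIGKILL);
	waitpid(child, NULL, 0);
	return 0;
}

With a real pin the trigger is the elevated refcount rather than the live
child, but the end result for the parent's mapping is the same.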

> 
> > If this is a problem, we may still need the fix patch (maybe not as urgent as
> > before, at least).  But I'd like to double-check, just in case I missed some
> > obvious facts above.
> 
> I'm worried that the sudden need to have MADV_DONTFORK is going to
> turn into a huge regression. It already blew up our first level of
> synthetic test cases. I'm worried about what we will see when the
> application suite is run in a few months :\

My own preference would be to change the kernel behavior, as long as the impact
stays under control (the reported 30%+ performance boost from the simplify-cow
patch is also attractive).  The other option is to maintain the old reuse logic
forever, which would be a different kind of burden.  There seems to be no easy
way out on either side...

> 
> > > Given that the tests are wrong it seems like broken userspace;
> > > however, it also worked reliably for a fairly long time.
> > 
> > IMHO it worked because the page used for RDMA has mapcount==1, so previously
> > it was simply reused as-is even after a fork without MADV_DONTFORK, once the
> > child quit.
> 
> That would match the results we see.. So this patch changes things so
> it is not re-used as-is, but replaced with Y?

Yes.  The patch lets the "replaced with Y" (COW) happen earlier, at step #3.
Then, with MADV_DONTFORK, the reuse should not happen anymore.
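
For completeness, a minimal sketch of the userspace ordering this implies: mark
the buffer MADV_DONTFORK before whatever driver call ends up pinning it, so a
later fork() never COW-shares the pinned pages with the child.  Here
register_dma_buffer() is only a hypothetical stand-in for the driver-specific
registration (e.g. ibv_reg_mr() for RDMA):

#include <stddef.h>
#include <sys/mman.h>

int register_dma_buffer(void *buf, size_t len);	/* hypothetical driver call */

void *alloc_dma_buffer(size_t len)
{
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return NULL;

	/* Opt this range out of fork(): the child gets no mapping of it,
	 * so the pinned pages can never become COW-shared. */
	if (madvise(buf, len, MADV_DONTFORK) ||
	    register_dma_buffer(buf, len)) {	/* pins the pages */
		munmap(buf, len);
		return NULL;
	}

	return buf;
}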

Thanks,

-- 
Peter Xu




