Re: [PATCH 15/33] userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY

On Wed, Nov 23, 2016 at 02:38:37PM +0800, Hillf Danton wrote:
> On Tuesday, November 22, 2016 9:17 AM Mike Kravetz wrote:
> > I am not sure if you are convinced ClearPagePrivate is an acceptable
> > solution to this issue.  If you do, here is the simple patch to add
> > it and an appropriate comment.
> > 
> Hi Mike and Andrea
> 
> Sorry for jumping in.
> 
> In commit 07443a85ad
> ("mm, hugetlb: return a reserved page to a reserved pool if failed")
> a newly allocated huge page gets its private flag cleared on a successful COW.
> 
> I'm wondering if we can handle our error path along that way?
> 
> Obviously I could be missing the points you are concerned about.

The hugepage allocation toggles the region covering the page in the
vma reservations, so when the vma is virtually unmapped, the regions
that got toggled are considered not reserved and the global
reservation is not decreased.

Because the global reservation is decreased by the same page
allocation that sets the page private flag after toggling the virtual
regions, the page private flag must be cleared when the page is
finally mapped in userland, as it's not reserved anymore. This way,
when the page is freed, the global reservation will not be increased
(and when the vma is unmapped the reservation will not be decreased
either, because of the region toggling above).

hugetlb_mcopy_atomic_pte is already correctly doing:

	ClearPagePrivate(page);
	hugepage_add_new_anon_rmap(page, dst_vma, dst_addr);

while mapping the hugepage in userland.

The issue is that if we can't reach hugetlb_mcopy_atomic_pte, because
userland screws with the vmas while UFFDIO_COPY has released the
mmap_sem, then at the point where we error out the vma is out of sync,
because we had to drop the mmap_sem in the first place. So we can't
toggle the vma virtual region covering the page back to its original
state (i.e. reserved). That's what restore_reserve_on_error would try
to achieve, but we can't run it, as the vma we have in the error path
is stale.

All we know is that one more page will be considered not reserved when
the vma is unmapped, so the global reservation will be decreased by one
less page at that point. In turn, when freeing such a hugepage in the
error path, we have to prevent the global reserve from being increased
once again, and to do that we have to clear the page private flag
before freeing the hugepage.
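
In code terms, the fix amounts to something like the following in the
__mcopy_atomic_hugetlb error path (a hedged sketch of the idea, not
the literal patch; the exact label and surrounding cleanup may differ):

	out:
		if (page) {
			/*
			 * The global reserve was already consumed for this
			 * page and the stale vma prevents restoring the
			 * reservation, so clear the private flag to keep
			 * free_huge_page() from incrementing the reserve a
			 * second time.
			 */
			ClearPagePrivate(page);
			put_page(page);
		}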

I already applied Mike's patch that clears the page private flag in
the error path. If anything is incorrect in the explanation above, let
me know.

Thanks,
Andrea
