Re: [PATCH v1] mm: hugetlb: fix hugepage memory leak caused by wrong reserve count

On Fri, Nov 20, 2015 at 02:26:38PM -0800, Andrew Morton wrote:
> On Fri, 20 Nov 2015 15:57:21 +0800 "Hillf Danton" <hillf.zj@xxxxxxxxxxxxxxx> wrote:
> 
> > > 
> > > When dequeue_huge_page_vma() in alloc_huge_page() fails, we fall back to
> > > alloc_buddy_huge_page() to directly create a hugepage from the buddy allocator.
> > > In that case, however, if alloc_buddy_huge_page() succeeds we don't decrement
> > > h->resv_huge_pages, which means that a successful hugetlb_fault() returns without
> > > releasing the reserve count. As a result, a subsequent hugetlb_fault() might fail
> > > even though there are still free hugepages.
> > > 
> > > This patch simply adds decrementing code on that code path.
> > > 
> > > I reproduced this problem when testing v4.3 kernel in the following situation:
> > > - the test machine/VM is a NUMA system,
> > > - hugepage overcommitting is enabled,
> > > - most of hugepages are allocated and there's only one free hugepage
> > >   which is on node 0 (for example),
> > > - another program, which calls set_mempolicy(MPOL_BIND) to bind itself to
> > >   node 1, tries to allocate a hugepage,
> > > - the allocation should fail, but the reserve count is still held.
> > > 
> > > Signed-off-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
> > > Cc: <stable@xxxxxxxxxxxxxxx> [3.16+]
> > > ---
> > > - the reason why I set stable target to "3.16+" is that this patch can be
> > >   applied easily/automatically on these versions. But this bug seems to be
> > >   old one, so if you are interested in backporting to older kernels,
> > >   please let me know.
> > > ---
> > >  mm/hugetlb.c |    5 ++++-
> > >  1 files changed, 4 insertions(+), 1 deletions(-)
> > > 
> > > diff --git v4.3/mm/hugetlb.c v4.3_patched/mm/hugetlb.c
> > > index 9cc7734..77c518c 100644
> > > --- v4.3/mm/hugetlb.c
> > > +++ v4.3_patched/mm/hugetlb.c
> > > @@ -1790,7 +1790,10 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
> > >  		page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
> > >  		if (!page)
> > >  			goto out_uncharge_cgroup;
> > > -
> > > +		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
> > > +			SetPagePrivate(page);
> > > +			h->resv_huge_pages--;
> > > +		}
> > 
> > I am wondering if this patch was prepared against the next tree.
> 
> It's against 4.3.

Hi Hillf, Andrew,

That's right, this was against 4.3, and I agree with the adjustment
for -next as done below.

> Here's the version I have, against current -linus:
> 
> --- a/mm/hugetlb.c~mm-hugetlb-fix-hugepage-memory-leak-caused-by-wrong-reserve-count
> +++ a/mm/hugetlb.c
> @@ -1886,7 +1886,10 @@ struct page *alloc_huge_page(struct vm_a
>  		page = __alloc_buddy_huge_page_with_mpol(h, vma, addr);
>  		if (!page)
>  			goto out_uncharge_cgroup;
> -
> +		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
> +			SetPagePrivate(page);
> +			h->resv_huge_pages--;
> +		}
>  		spin_lock(&hugetlb_lock);
>  		list_move(&page->lru, &h->hugepage_activelist);
>  		/* Fall through */
> 
> It needs a careful re-review and, preferably, retest please.

I retested and confirmed that the fix works on next-20151123.

Thanks,
Naoya Horiguchi

> Probably when Greg comes to merge this he'll hit problems and we'll
> need to provide him with the against-4.3 patch.
> 