Re: [PATCH v1] mm: hugetlb: fix hugepage memory leak caused by wrong reserve count

On 11/19/2015 11:57 PM, Hillf Danton wrote:
>>
>> When dequeue_huge_page_vma() in alloc_huge_page() fails, we fall back to
>> alloc_buddy_huge_page() to directly create a hugepage from the buddy allocator.
>> In that case, however, if alloc_buddy_huge_page() succeeds we don't decrement
>> h->resv_huge_pages, which means that a successful hugetlb_fault() returns without
>> releasing the reserve count. As a result, subsequent hugetlb_fault() calls might
>> fail even though there are still free hugepages.
>>
>> This patch simply adds decrementing code on that code path.

In general, I agree with the patch.  If we allocate a huge page via the
buddy allocator and that page will be used to satisfy a reservation, then
we need to decrement the reservation count.
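
To make the accounting concrete, here is a condensed sketch of that
path in v4.3 alloc_huge_page() with the proposed fix applied
(illustrative only; the cgroup charge and error paths are trimmed):

	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
	if (!page) {
		/* no free hugepage on an allowed node: try the buddy allocator */
		spin_unlock(&hugetlb_lock);
		page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
		if (!page)
			goto out_uncharge_cgroup;
		/*
		 * The fix: if this page is backing a reservation, consume
		 * the reserve count here, just as the dequeue path does.
		 */
		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
			SetPagePrivate(page);
			h->resv_huge_pages--;
		}
		spin_lock(&hugetlb_lock);
		list_move(&page->lru, &h->hugepage_activelist);
		/* Fall through */
	}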

As Hillf mentions, this code is not exactly the same in linux-next.
Specifically, there is the new call to take the memory policy of the
vma into account when calling the buddy allocator.  I do not think
this impacts your proposed change, but you may want to test with it
in place.
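
For reference, my understanding of the shape of the linux-next code
is roughly the following (the helper name and arguments here are a
sketch from memory, not the exact linux-next source):

	/*
	 * Illustrative only: in linux-next the buddy-allocator fallback
	 * consults the VMA's memory policy instead of allowing any node,
	 * so the page comes from a node the mempolicy permits.
	 */
	page = __alloc_buddy_huge_page_with_mpol(h, vma, addr);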

>>
>> I reproduced this problem when testing v4.3 kernel in the following situation:
>> - the test machine/VM is a NUMA system,
>> - hugepage overcommitting is enabled,
>> - most hugepages are allocated and there's only one free hugepage,
>>   which is on node 0 (for example),
>> - another program, which calls set_mempolicy(MPOL_BIND) to bind itself to
>>   node 1, tries to allocate a hugepage,

I am curious about this scenario.  When this second program attempts to
allocate the page, I assume it creates a reservation first.  Is this
reservation before or after setting the mempolicy?  If the mempolicy
was set first, I would have expected a page on node 1 to be allocated
to satisfy the reservation.
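
If it helps, here is a minimal userspace sketch of what I understand
the second program to be doing, with the mempolicy set before the
mapping is created (it assumes a 2MB default hugepage size and that
overcommit is enabled via /proc/sys/vm/nr_overcommit_hugepages;
build with -lnuma):

	#define _GNU_SOURCE
	#include <numaif.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		unsigned long nodemask = 1UL << 1;	/* allow node 1 only */

		if (set_mempolicy(MPOL_BIND, &nodemask,
				  8 * sizeof(nodemask)) < 0)
			perror("set_mempolicy");

		/* the hugepage reservation is taken at mmap() time ... */
		void *p = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
			       -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		/* ... and this fault must be satisfied from node 1 */
		memset(p, 0, 2UL << 20);
		return 0;
	}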

-- 
Mike Kravetz

>> - the allocation should fail but the reserve count is still held.
>>
>> Signed-off-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
>> Cc: <stable@xxxxxxxxxxxxxxx> [3.16+]
>> ---
>> - the reason I set the stable target to "3.16+" is that this patch can be
>>   applied easily/automatically to those versions. But this bug seems to be
>>   an old one, so if you are interested in backporting it to older kernels,
>>   please let me know.
>> ---
>>  mm/hugetlb.c |    5 ++++-
>>  1 files changed, 4 insertions(+), 1 deletions(-)
>>
>> diff --git v4.3/mm/hugetlb.c v4.3_patched/mm/hugetlb.c
>> index 9cc7734..77c518c 100644
>> --- v4.3/mm/hugetlb.c
>> +++ v4.3_patched/mm/hugetlb.c
>> @@ -1790,7 +1790,10 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>>  		page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
>>  		if (!page)
>>  			goto out_uncharge_cgroup;
>> -
>> +		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
>> +			SetPagePrivate(page);
>> +			h->resv_huge_pages--;
>> +		}
> 
> I am wondering if this patch was prepared against the next tree.
> 
>>  		spin_lock(&hugetlb_lock);
>>  		list_move(&page->lru, &h->hugepage_activelist);
>>  		/* Fall through */
>> --
>> 1.7.1
> 
