On 2017/5/10 14:51, Vlastimil Babka wrote:
> On 05/09/2017 03:54 PM, zhong jiang wrote:
>> Hi, Vlastimil
>>
>> I reviewed the code again. It works well for NUMA, because
>> khugepaged_prealloc_page() will call put_page() when *hpage is true.
>>
>> But the memory leak would still exist for !NUMA, because that path
>> ignores the put_page(). Is that right, or am I missing something?
> No, on !NUMA the preallocated and unused new_page is freed by put_page()
> at the very end of khugepaged_do_scan().

Thank you for the clarification. I should have looked into this more
carefully before sending the patch.

Thanks,
zhongjiang

>> Thanks
>> zhongjiang
>>
>> On 2017/5/9 20:41, Vlastimil Babka wrote:
>>> On 05/09/2017 02:20 PM, zhong jiang wrote:
>>>> On 2017/5/9 19:34, Vlastimil Babka wrote:
>>>>> On 05/09/2017 12:55 PM, zhongjiang wrote:
>>>>>> From: zhong jiang <zhongjiang@xxxxxxxxxx>
>>>>>>
>>>>>> Currently, when we prepare a huge page to collapse, the collapse
>>>>>> can still fail for various reasons. In that case we should
>>>>>> release the preallocated huge page.
>>>>>>
>>>>>> Signed-off-by: zhong jiang <zhongjiang@xxxxxxxxxx>
>>>>> Hmm, scratch that, there's no memory leak. The pointer to new_page is
>>>>> stored in *hpage, and put_page() is called all the way up in
>>>>> khugepaged_do_scan().
>>>> I see, I missed that. But why does new_page need to be released all
>>>> the way up there?
>>> AFAIK to support preallocation and reuse of the preallocated page for
>>> collapse attempts in different pmds. It only works for !NUMA so it's
>>> likely not worth all the trouble and complicated code, so I wouldn't be
>>> opposed to simplifying this.
>>>
>>>> I do not see the count increment when the scan succeeds. It saves
>>>> the memory, only when a page fault happens.
>>> I don't understand what you mean here?
>>>
>>>> Thanks
>>>> zhongjiang
>>>>>> ---
>>>>>>  mm/khugepaged.c | 4 ++++
>>>>>>  1 file changed, 4 insertions(+)
>>>>>>
>>>>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>>>>> index 7cb9c88..586b1f1 100644
>>>>>> --- a/mm/khugepaged.c
>>>>>> +++ b/mm/khugepaged.c
>>>>>> @@ -1082,6 +1082,8 @@ static void collapse_huge_page(struct mm_struct *mm,
>>>>>>  	up_write(&mm->mmap_sem);
>>>>>>  out_nolock:
>>>>>>  	trace_mm_collapse_huge_page(mm, isolated, result);
>>>>>> +	if (page != NULL && result != SCAN_SUCCEED)
>>>>>> +		put_page(new_page);
>>>>>>  	return;
>>>>>>  out:
>>>>>>  	mem_cgroup_cancel_charge(new_page, memcg, true);
>>>>>> @@ -1555,6 +1557,8 @@ static void collapse_shmem(struct mm_struct *mm,
>>>>>>  	}
>>>>>>  out:
>>>>>>  	VM_BUG_ON(!list_empty(&pagelist));
>>>>>> +	if (page != NULL && result != SCAN_SUCCEED)
>>>>>> +		put_page(new_page);
>>>>>>  	/* TODO: tracepoints */
>>>>>>  }
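
Below is a minimal sketch of the lifecycle Vlastimil describes, assuming
the khugepaged_do_scan() / khugepaged_prealloc_page() /
khugepaged_scan_mm_slot() structure of mm/khugepaged.c from that era; it
deliberately omits the locking, freezing and mm-slot bookkeeping of the
real function, and only illustrates why collapse_huge_page() does not
need its own put_page() on a failed collapse.

	/*
	 * Simplified sketch, not the exact kernel code: the single
	 * preallocated huge page lives in *hpage, is reused across
	 * collapse attempts in different pmds, and its reference is
	 * dropped only once, at the very end of the scan.
	 */
	static void khugepaged_do_scan(void)
	{
		struct page *hpage = NULL;
		unsigned int progress = 0;
		unsigned int pages = READ_ONCE(khugepaged_pages_to_scan);
		bool wait = true;

		while (progress < pages) {
			/*
			 * On !NUMA this allocates a huge page only if *hpage
			 * is not already holding an unused page left over
			 * from a previous, failed collapse attempt.
			 */
			if (!khugepaged_prealloc_page(&hpage, &wait))
				break;

			cond_resched();

			/*
			 * A failed collapse leaves *hpage in place for reuse,
			 * so collapse_huge_page() must not put_page() it.
			 */
			progress += khugepaged_scan_mm_slot(pages - progress,
							    &hpage);
		}

		/* The one place the unused preallocated page is freed. */
		if (!IS_ERR_OR_NULL(hpage))
			put_page(hpage);
	}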