When khugepaged has successfully isolated and copied data from a base
page to the collapsed THP, the base page is about to be freed.  Putting
the page back on the LRU is not productive: the page might be isolated
by vmscan, but it can't be reclaimed by vmscan since it can't be
unmapped by try_to_unmap() at all.  khugepaged is actually the last
user of this page, so the page can be freed directly.  So, clear the
active and unevictable flags, unlock the page and drop the refcount
taken at isolation, instead of calling putback_lru_page().

Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Signed-off-by: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
---
 mm/khugepaged.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0c8d30b..c131a90 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -559,6 +559,17 @@ void __khugepaged_exit(struct mm_struct *mm)
 static void release_pte_page(struct page *page)
 {
 	mod_node_page_state(page_pgdat(page),
+			NR_ISOLATED_ANON + page_is_file_lru(page), -compound_nr(page));
+	ClearPageActive(page);
+	ClearPageUnevictable(page);
+	unlock_page(page);
+	/* Drop refcount from isolate */
+	put_page(page);
+}
+
+static void release_pte_page_to_lru(struct page *page)
+{
+	mod_node_page_state(page_pgdat(page),
 			NR_ISOLATED_ANON + page_is_file_lru(page),
 			-compound_nr(page));
 	unlock_page(page);
@@ -576,12 +587,12 @@ static void release_pte_pages(pte_t *pte, pte_t *_pte,
 		page = pte_page(pteval);
 		if (!pte_none(pteval) && !is_zero_pfn(pte_pfn(pteval)) &&
 		    !PageCompound(page))
-			release_pte_page(page);
+			release_pte_page_to_lru(page);
 	}
 
 	list_for_each_entry_safe(page, tmp, compound_pagelist, lru) {
 		list_del(&page->lru);
-		release_pte_page(page);
+		release_pte_page_to_lru(page);
 	}
 }
-- 
1.8.3.1