From: chenqiwu <chenqiwu@xxxxxxxxxx>

As discussed in patch [1], there is an imbalance of the normal page
refcount between copy_one_pte() and zap_pte_range(). This patch puts
the refcount of the normal page back in zap_pte_range() to fix the
imbalance.

[1] https://patchwork.kernel.org/patch/11494691/

Signed-off-by: chenqiwu <chenqiwu@xxxxxxxxxx>
---
 mm/memory.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/memory.c b/mm/memory.c
index 2143827..ec8de9a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1088,6 +1088,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			}
 			rss[mm_counter(page)]--;
 			page_remove_rmap(page, false);
+			put_page(page);
 			if (unlikely(page_mapcount(page) < 0))
				print_bad_pte(vma, addr, ptent, page);
 			if (unlikely(__tlb_remove_page(tlb, page))) {
-- 
1.9.1
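
[Editorial note, not part of the patch: below is a minimal user-space sketch of
the refcount pairing the commit message describes, assuming the copy path takes
a reference that the zap path must later drop. All names (fake_page,
copy_side_get, zap_side_put) are hypothetical illustrations, not kernel APIs.]

/*
 * Illustrative sketch only: model a "copy" path that pins a page and a
 * "zap" path that is expected to drop that pin again. If the put is
 * missing, the count stays elevated, which is the imbalance claimed above.
 */
#include <stdio.h>

struct fake_page {
	int refcount;
};

/* Model of the copy side: take an extra reference on the page. */
static void copy_side_get(struct fake_page *page)
{
	page->refcount++;
}

/* Model of the zap side: drop the reference taken by the copy side. */
static void zap_side_put(struct fake_page *page)
{
	page->refcount--;
}

int main(void)
{
	struct fake_page page = { .refcount = 1 };	/* initial reference */

	copy_side_get(&page);	/* copy path pins the page */
	zap_side_put(&page);	/* without this, the count would stay at 2 */

	printf("refcount after copy+zap: %d\n", page.refcount);	/* expect 1 */
	return 0;
}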