From: zhong jiang <zhongjiang@xxxxxxxxxx>

huge_pmd_share accounts the number of pmds incorrectly when it races
with a parallel pud instantiation. vma_interval_tree_foreach will
increase the counter, but it then has to recheck the pud with the pte
lock held, and the back-off path should drop the increment. The
previous code instead incremented again, leading to an elevated pmd
count. This shouldn't be very harmful (check_mm() might complain and
oom_badness() might be marginally confused), but it is worth fixing.

Suggested-by: Michal Hocko <mhocko@xxxxxxxxxx>
Signed-off-by: zhong jiang <zhongjiang@xxxxxxxxxx>
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 19d0d08..3072857 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4191,7 +4191,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 				(pmd_t *)((unsigned long)spte & PAGE_MASK));
 	} else {
 		put_page(virt_to_page(spte));
-		mm_inc_nr_pmds(mm);
+		mm_dec_nr_pmds(mm);
 	}
 	spin_unlock(ptl);
 out:
--
1.8.3.1