On Tue, 18 May 2021, Anshuman Khandual wrote:

> Although the zero huge page is being shared across various processes, each
> mapping needs to update its mm_struct's MM_ANONPAGES stat by HPAGE_PMD_NR
> to be consistent. This just updates the stats in set_huge_zero_page() after
> the mapping gets created and in zap_huge_pmd() when the mapping gets destroyed.
> 
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Zi Yan <ziy@xxxxxxxxxx>
> Cc: linux-mm@xxxxxxxxx
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> Signed-off-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>

NAK. For consistency with what?

In all the years that the huge zero page has existed, it has been
intentionally exempted from rss accounting: consistent with the small
zero page, which itself has always been intentionally exempted from
rss accounting. In fact, that's a good part of the reason the huge
zero page was introduced (see 4a6c1297268c). To change that now will
break any users depending on it.

Not to mention the

  BUG: Bad rss-counter state mm:00000000aa61ef82 type:MM_ANONPAGES val:512

I just got from mmotm.

Hugh

> ---
> This applies on v5.13-rc2.
> 
> Changes in V1:
> 
> - Updated MM_ANONPAGES stat in zap_huge_pmd()
> - Updated the commit message
> 
> Changes in RFC:
> 
> https://lore.kernel.org/linux-mm/1620890438-9127-1-git-send-email-anshuman.khandual@xxxxxxx/
> 
>  mm/huge_memory.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 63ed6b25deaa..306d0a41bf75 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -706,6 +706,7 @@ static void set_huge_zero_page(pgtable_t pgtable, struct mm_struct *mm,
>  	if (pgtable)
>  		pgtable_trans_huge_deposit(mm, pmd, pgtable);
>  	set_pmd_at(mm, haddr, pmd, entry);
> +	add_mm_counter(mm, MM_ANONPAGES, HPAGE_PMD_NR);
>  	mm_inc_nr_ptes(mm);
>  }
>  
> @@ -1678,6 +1679,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  		tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
>  	} else if (is_huge_zero_pmd(orig_pmd)) {
>  		zap_deposited_table(tlb->mm, pmd);
> +		add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
>  		spin_unlock(ptl);
>  		tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
>  	} else {
> -- 
> 2.20.1
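
The exemption Hugh is defending can be observed from userspace. Below
is a minimal demonstration sketch, not from the thread itself (the
file name, region size, and printed numbers are illustrative
assumptions): read-faulting an anonymous mapping installs the small
zero page, or the huge zero page when THP kicks in, without charging
MM_ANONPAGES, so resident pages in /proc/self/statm stay flat; only
the subsequent write faults allocate real pages and grow RSS.

/*
 * zero_rss_demo.c - illustrative sketch, assuming Linux with glibc.
 *
 * Read-fault an anonymous region (mapping the zero page / huge zero
 * page), then write-fault it (allocating real anonymous pages), and
 * print resident page counts from /proc/self/statm at each step.
 *
 * Build: gcc -O2 -o zero_rss_demo zero_rss_demo.c
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN	(32UL << 20)	/* 32MB anonymous region */

static long rss_pages(void)
{
	long size, resident;
	FILE *f = fopen("/proc/self/statm", "r");

	if (!f || fscanf(f, "%ld %ld", &size, &resident) != 2)
		exit(1);
	fclose(f);
	return resident;
}

int main(void)
{
	volatile char *p;
	char sink = 0;
	size_t i;

	p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;
	madvise((void *)p, LEN, MADV_HUGEPAGE);	/* invite THP, best effort */

	long before = rss_pages();
	for (i = 0; i < LEN; i += 4096)
		sink ^= p[i];		/* read faults: (huge) zero page */
	long after_read = rss_pages();
	for (i = 0; i < LEN; i += 4096)
		p[i] = 1;		/* write faults: real, counted pages */
	long after_write = rss_pages();

	printf("resident pages: before=%ld after-read=%ld after-write=%ld\n",
	       before, after_read, after_write);
	return sink & 0;	/* keep the reads from being optimized away */
}

With a 32MB region and 4KB base pages, the read pass should leave the
resident count essentially unchanged, while the write pass should grow
it by roughly 8192 pages (arriving in 512-page chunks where THP backs
the region). That flat read-side number is exactly what VmRSS
consumers rely on, and what the patch above would have changed.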