The patch titled
     Subject: mm/rmap: simplify PageAnonExclusive sanity checks when adding anon rmap
has been added to the -mm mm-unstable branch.  Its filename is
     mm-rmap-simplify-pageanonexclusive-sanity-checks-when-adding-anon-rmap.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-rmap-simplify-pageanonexclusive-sanity-checks-when-adding-anon-rmap.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: David Hildenbrand <david@xxxxxxxxxx>
Subject: mm/rmap: simplify PageAnonExclusive sanity checks when adding anon rmap
Date: Wed, 13 Sep 2023 14:51:12 +0200

Let's sanity-check PageAnonExclusive vs. mapcount in page_add_anon_rmap()
and hugepage_add_anon_rmap() after setting PageAnonExclusive simply by
re-reading the mapcounts.

We can stop initializing the "first" variable in page_add_anon_rmap() and
no longer need an atomic_inc_and_test() in hugepage_add_anon_rmap().

While at it, switch to VM_WARN_ON_FOLIO().

Link: https://lkml.kernel.org/r/20230913125113.313322-6-david@xxxxxxxxxx
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/rmap.c |   14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

--- a/mm/rmap.c~mm-rmap-simplify-pageanonexclusive-sanity-checks-when-adding-anon-rmap
+++ a/mm/rmap.c
@@ -1199,7 +1199,7 @@ void page_add_anon_rmap(struct page *pag
 	atomic_t *mapped = &folio->_nr_pages_mapped;
 	int nr = 0, nr_pmdmapped = 0;
 	bool compound = flags & RMAP_COMPOUND;
-	bool first = true;
+	bool first;
 
 	/* Is page being mapped by PTE? Is this its first map to be added? */
 	if (likely(!compound)) {
@@ -1228,9 +1228,6 @@ void page_add_anon_rmap(struct page *pag
 		}
 	}
 
-	VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
-	VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
-
 	if (nr_pmdmapped)
 		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr_pmdmapped);
 	if (nr)
@@ -1252,6 +1249,8 @@ void page_add_anon_rmap(struct page *pag
 	}
 	if (flags & RMAP_EXCLUSIVE)
 		SetPageAnonExclusive(page);
+	VM_WARN_ON_FOLIO(page_mapcount(page) > 1 && PageAnonExclusive(page),
+			 folio);
 
 	mlock_vma_folio(folio, vma, compound);
 }
@@ -2532,15 +2531,14 @@ void hugepage_add_anon_rmap(struct page
 		unsigned long address, rmap_t flags)
 {
 	struct folio *folio = page_folio(page);
-	int first;
 
 	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 
-	first = atomic_inc_and_test(&folio->_entire_mapcount);
-	VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
-	VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);
+	atomic_inc(&folio->_entire_mapcount);
 	if (flags & RMAP_EXCLUSIVE)
 		SetPageAnonExclusive(page);
+	VM_WARN_ON_FOLIO(folio_entire_mapcount(folio) > 1 &&
+			 PageAnonExclusive(page), folio);
 }
 
 void hugepage_add_new_anon_rmap(struct folio *folio,
_

Patches currently in -mm which might be from david@xxxxxxxxxx are

mm-rmap-drop-stale-comment-in-page_add_anon_rmap-and-hugepage_add_anon_rmap.patch
mm-rmap-move-setpageanonexclusive-out-of-__page_set_anon_rmap.patch
mm-rmap-move-folio_test_anon-check-out-of-__folio_set_anon.patch
mm-rmap-warn-on-new-pte-mapped-folios-in-page_add_anon_rmap.patch
mm-rmap-simplify-pageanonexclusive-sanity-checks-when-adding-anon-rmap.patch
mm-rmap-pass-folio-to-hugepage_add_anon_rmap.patch
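
For anyone reviewing the equivalence of the old and new checks, here is a
minimal standalone userspace sketch (not kernel code; struct mock_folio
and its helpers are made up for illustration).  _entire_mapcount is stored
biased by -1, so atomic_inc_and_test() returning true ("first" mapping) is
the same condition as the post-increment mapcount reading 1; hence the old
"!first && PageAnonExclusive" check and the new
"mapcount > 1 && PageAnonExclusive" check flag exactly the same states:

/* mock_anon_rmap.c -- illustrative only, not kernel code */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for struct folio; entire_mapcount is biased by -1. */
struct mock_folio {
	atomic_int entire_mapcount;
	bool anon_exclusive;		/* stands in for PageAnonExclusive */
};

/* Mirrors folio_entire_mapcount(): the stored value plus 1. */
static int mock_entire_mapcount(struct mock_folio *f)
{
	return atomic_load(&f->entire_mapcount) + 1;
}

static void mock_add_anon_rmap(struct mock_folio *f, bool exclusive)
{
	/* Old scheme: atomic_inc_and_test() == "counter became 0" == first. */
	bool first = atomic_fetch_add(&f->entire_mapcount, 1) + 1 == 0;

	if (exclusive)
		f->anon_exclusive = true;

	/* "first" carries no information beyond the mapcount itself... */
	assert(first == (mock_entire_mapcount(f) == 1));

	/* ...so re-reading the mapcount afterwards catches the same bugs. */
	if (mock_entire_mapcount(f) > 1 && f->anon_exclusive)
		fprintf(stderr, "WARN: exclusive page with mapcount > 1\n");
}

int main(void)
{
	struct mock_folio f = { .entire_mapcount = -1 };

	mock_add_anon_rmap(&f, true);	/* first mapping, exclusive: OK */
	mock_add_anon_rmap(&f, false);	/* second mapping: warning fires */
	return 0;
}

The second call models the invariant violation the new VM_WARN_ON_FOLIO()
is there to catch: an anon page must not remain PageAnonExclusive once it
is mapped more than once (e.g. fork() clears or COWs exclusive pages
before sharing them).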