The patch titled
     Subject: mm/rmap: pass folio to hugepage_add_anon_rmap()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-rmap-pass-folio-to-hugepage_add_anon_rmap.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-rmap-pass-folio-to-hugepage_add_anon_rmap.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: David Hildenbrand <david@xxxxxxxxxx>
Subject: mm/rmap: pass folio to hugepage_add_anon_rmap()
Date: Wed, 13 Sep 2023 14:51:13 +0200

Let's pass a folio; we are always mapping the entire thing.

Link: https://lkml.kernel.org/r/20230913125113.313322-7-david@xxxxxxxxxx
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/rmap.h |    2 +-
 mm/migrate.c         |    2 +-
 mm/rmap.c            |    8 +++-----
 3 files changed, 5 insertions(+), 7 deletions(-)

--- a/include/linux/rmap.h~mm-rmap-pass-folio-to-hugepage_add_anon_rmap
+++ a/include/linux/rmap.h
@@ -203,7 +203,7 @@ void folio_add_file_rmap_range(struct fo
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
 
-void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
+void hugepage_add_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address, rmap_t flags);
 void hugepage_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
--- a/mm/migrate.c~mm-rmap-pass-folio-to-hugepage_add_anon_rmap
+++ a/mm/migrate.c
@@ -247,7 +247,7 @@ static bool remove_migration_pte(struct 
 
 			pte = arch_make_huge_pte(pte, shift, vma->vm_flags);
 			if (folio_test_anon(folio))
-				hugepage_add_anon_rmap(new, vma, pvmw.address,
+				hugepage_add_anon_rmap(folio, vma, pvmw.address,
 						       rmap_flags);
 			else
 				page_dup_file_rmap(new, true);
--- a/mm/rmap.c~mm-rmap-pass-folio-to-hugepage_add_anon_rmap
+++ a/mm/rmap.c
@@ -2527,18 +2527,16 @@ void rmap_walk_locked(struct folio *foli
  *
  * RMAP_COMPOUND is ignored.
  */
-void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
+void hugepage_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 		unsigned long address, rmap_t flags)
 {
-	struct folio *folio = page_folio(page);
-
 	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 
 	atomic_inc(&folio->_entire_mapcount);
 	if (flags & RMAP_EXCLUSIVE)
-		SetPageAnonExclusive(page);
+		SetPageAnonExclusive(&folio->page);
 	VM_WARN_ON_FOLIO(folio_entire_mapcount(folio) > 1 &&
-			 PageAnonExclusive(page), folio);
+			 PageAnonExclusive(&folio->page), folio);
 }
 
 void hugepage_add_new_anon_rmap(struct folio *folio,
_
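
For illustration, a minimal caller-side sketch of what the new prototype
implies (hypothetical code, not taken from the patch): a caller that so far
passed the hugetlb head page now passes the folio, typically obtained via
page_folio(), just as remove_migration_pte() above already has the folio in
hand.  Here, "page", "vma", "address" and "rmap_flags" are assumed to be in
scope in the caller:

	/*
	 * Hypothetical caller, mirroring the remove_migration_pte() hunk
	 * above; "page" is a hugetlb page the caller already holds.
	 */
	struct folio *folio = page_folio(page);	/* hugetlb page -> its folio */

	if (folio_test_anon(folio))
		/* used to take "page"; now takes the folio */
		hugepage_add_anon_rmap(folio, vma, address, rmap_flags);
	else
		page_dup_file_rmap(page, true);

Since a hugetlb page is always mapped as one entire folio, the function only
needs the folio, which is why the page_folio() lookup can move out of
hugepage_add_anon_rmap() (or disappear entirely for callers that already
track the folio).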

Patches currently in -mm which might be from david@xxxxxxxxxx are

mm-rmap-drop-stale-comment-in-page_add_anon_rmap-and-hugepage_add_anon_rmap.patch
mm-rmap-move-setpageanonexclusive-out-of-__page_set_anon_rmap.patch
mm-rmap-move-folio_test_anon-check-out-of-__folio_set_anon.patch
mm-rmap-warn-on-new-pte-mapped-folios-in-page_add_anon_rmap.patch
mm-rmap-simplify-pageanonexclusive-sanity-checks-when-adding-anon-rmap.patch
mm-rmap-pass-folio-to-hugepage_add_anon_rmap.patch