The patch titled
     Subject: mm: rmap: remove lock_page_memcg()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-rmap-remove-lock_page_memcg.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-rmap-remove-lock_page_memcg.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Johannes Weiner <hannes@xxxxxxxxxxx>
Subject: mm: rmap: remove lock_page_memcg()
Date: Tue, 6 Dec 2022 18:13:40 +0100

The previous patch made sure charge moving only touches pages for which
page_mapped() is stable.  lock_page_memcg() is no longer needed.

Link: https://lkml.kernel.org/r/20221206171340.139790-3-hannes@xxxxxxxxxxx
Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Acked-by: Hugh Dickins <hughd@xxxxxxxxxx>
Acked-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/rmap.c |   26 ++++++++------------------
 1 file changed, 8 insertions(+), 18 deletions(-)

--- a/mm/rmap.c~mm-rmap-remove-lock_page_memcg
+++ a/mm/rmap.c
@@ -1222,9 +1222,6 @@ void page_add_anon_rmap(struct page *pag
         bool compound = flags & RMAP_COMPOUND;
         bool first = true;
 
-        if (unlikely(PageKsm(page)))
-                lock_page_memcg(page);
-
         /* Is page being mapped by PTE? Is this its first map to be added? */
         if (likely(!compound)) {
                 first = atomic_inc_and_test(&page->_mapcount);
@@ -1262,15 +1259,14 @@ void page_add_anon_rmap(struct page *pag
         if (nr)
                 __mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);
 
-        if (unlikely(PageKsm(page)))
-                unlock_page_memcg(page);
-
-        /* address might be in next vma when migration races vma_adjust */
-        else if (first)
-                __page_set_anon_rmap(page, vma, address,
-                                     !!(flags & RMAP_EXCLUSIVE));
-        else
-                __page_check_anon_rmap(page, vma, address);
+        if (likely(!PageKsm(page))) {
+                /* address might be in next vma when migration races vma_adjust */
+                if (first)
+                        __page_set_anon_rmap(page, vma, address,
+                                             !!(flags & RMAP_EXCLUSIVE));
+                else
+                        __page_check_anon_rmap(page, vma, address);
+        }
 
         mlock_vma_page(page, vma, compound);
 }
@@ -1329,7 +1325,6 @@ void page_add_file_rmap(struct page *pag
         bool first;
 
         VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
-        lock_page_memcg(page);
 
         /* Is page being mapped by PTE? Is this its first map to be added? */
         if (likely(!compound)) {
@@ -1365,7 +1360,6 @@ void page_add_file_rmap(struct page *pag
                         NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, nr_pmdmapped);
         if (nr)
                 __mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
-        unlock_page_memcg(page);
 
         mlock_vma_page(page, vma, compound);
 }
@@ -1394,8 +1388,6 @@ void page_remove_rmap(struct page *page,
                 return;
         }
 
-        lock_page_memcg(page);
-
         /* Is page being unmapped by PTE? Is this its last map to be removed? */
         if (likely(!compound)) {
                 last = atomic_add_negative(-1, &page->_mapcount);
@@ -1451,8 +1443,6 @@ void page_remove_rmap(struct page *page,
          * and remember that it's only reliable while mapped.
          */
 
-        unlock_page_memcg(page);
-
         munlock_vma_page(page, vma, compound);
 }
_

Patches currently in -mm which might be from hannes@xxxxxxxxxxx are

mm-memcontrol-skip-moving-non-present-pages-that-are-mapped-elsewhere.patch
mm-rmap-remove-lock_page_memcg.patch
mm-memcontrol-deprecate-charge-moving.patch
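
As a quick reference for the second hunk: the old code hung the anon rmap
setup/check off the same PageKsm() branch that did the unlock, so dropping
the memcg locking also restructures that conditional.  Below is a condensed
sketch of how the tail of page_add_anon_rmap() reads with this patch
applied; it is assembled from the hunks above rather than copied verbatim
from mm/rmap.c:

        if (nr)
                __mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);

        /* KSM pages skip the anon_vma setup/check below */
        if (likely(!PageKsm(page))) {
                /* address might be in next vma when migration races vma_adjust */
                if (first)
                        __page_set_anon_rmap(page, vma, address,
                                             !!(flags & RMAP_EXCLUSIVE));
                else
                        __page_check_anon_rmap(page, vma, address);
        }

        mlock_vma_page(page, vma, compound);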