The patch titled
     Subject: memcg-v1: no need for memcg locking for MGLRU
has been added to the -mm mm-unstable branch.  Its filename is
     memcg-v1-no-need-for-memcg-locking-for-mglru.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/memcg-v1-no-need-for-memcg-locking-for-mglru.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Shakeel Butt <shakeel.butt@xxxxxxxxx>
Subject: memcg-v1: no need for memcg locking for MGLRU
Date: Thu, 24 Oct 2024 18:23:02 -0700

While updating the generation of the folios, MGLRU requires that the
folio's memcg association remains stable.  With charge migration now
deprecated, there is no need for MGLRU to acquire locks to keep the
folio and memcg association stable.

Link: https://lkml.kernel.org/r/20241025012304.2473312-6-shakeel.butt@xxxxxxxxx
Signed-off-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
Reviewed-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/vmscan.c |   11 -----------
 1 file changed, 11 deletions(-)

--- a/mm/vmscan.c~memcg-v1-no-need-for-memcg-locking-for-mglru
+++ a/mm/vmscan.c
@@ -3662,10 +3662,6 @@ static void walk_mm(struct mm_struct *mm
 		if (walk->seq != max_seq)
 			break;
 
-		/* folio_update_gen() requires stable folio_memcg() */
-		if (!mem_cgroup_trylock_pages(memcg))
-			break;
-
 		/* the caller might be holding the lock for write */
 		if (mmap_read_trylock(mm)) {
 			err = walk_page_range(mm, walk->next_addr, ULONG_MAX, &mm_walk_ops, walk);
@@ -3673,8 +3669,6 @@ static void walk_mm(struct mm_struct *mm
 			mmap_read_unlock(mm);
 		}
 
-		mem_cgroup_unlock_pages();
-
 		if (walk->batched) {
 			spin_lock_irq(&lruvec->lru_lock);
 			reset_batch_size(walk);
@@ -4096,10 +4090,6 @@ bool lru_gen_look_around(struct page_vma
 		}
 	}
 
-	/* folio_update_gen() requires stable folio_memcg() */
-	if (!mem_cgroup_trylock_pages(memcg))
-		return true;
-
 	arch_enter_lazy_mmu_mode();
 
 	pte -= (addr - start) / PAGE_SIZE;
@@ -4144,7 +4134,6 @@ bool lru_gen_look_around(struct page_vma
 	}
 
 	arch_leave_lazy_mmu_mode();
-	mem_cgroup_unlock_pages();
 
 	/* feedback from rmap walkers to page table walkers */
 	if (mm_state && suitable_to_scan(i, young))
_
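For context on what the removed trylock actually guarded: the helpers
were small inlines in include/linux/memcontrol.h (removed outright by
the later memcg-v1-remove-memcg-move-locking-code.patch in this
series).  A rough from-memory sketch follows; it is not part of this
patch and details may differ slightly from the tree being patched:

/*
 * Approximate shape of the pre-patch helpers.  folio_memcg() could
 * only change underneath a reader while a memcg-v1 charge move
 * (move_charge_at_immigrate) was in flight, which was signalled via
 * memcg->moving_account.
 */
static inline bool mem_cgroup_trylock_pages(struct mem_cgroup *memcg)
{
	/* keep folio_memcg() stable for the duration of the walk */
	rcu_read_lock();
	if (mem_cgroup_disabled() || !atomic_read(&memcg->moving_account))
		return true;	/* no charge move in flight */

	rcu_read_unlock();	/* a move is running: back off */
	return false;
}

static inline void mem_cgroup_unlock_pages(void)
{
	rcu_read_unlock();
}

With charge moving removed earlier in this series, moving_account can
no longer be raised, so the trylock can never fail and the pair
collapses into a bare RCU read-side section the MGLRU walkers do not
otherwise need, which is why the call sites above can simply be
deleted.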
Patches currently in -mm which might be from shakeel.butt@xxxxxxxxx are

mm-optimize-truncation-of-shadow-entries.patch
mm-optimize-invalidation-of-shadow-entries.patch
mm-truncate-reset-xa_has_values-flag-on-each-iteration.patch
memcg-add-tracing-for-memcg-stat-updates.patch
memcg-add-tracing-for-memcg-stat-updates-v2.patch
memcg-v1-fully-deprecate-move_charge_at_immigrate.patch
memcg-v1-remove-charge-move-code.patch
memcg-v1-no-need-for-memcg-locking-for-dirty-tracking.patch
memcg-v1-no-need-for-memcg-locking-for-writeback-tracking.patch
memcg-v1-no-need-for-memcg-locking-for-mglru.patch
memcg-v1-remove-memcg-move-locking-code.patch