The patch titled
     Subject: memcg-v1: no need for memcg locking for dirty tracking
has been added to the -mm mm-unstable branch.  Its filename is
     memcg-v1-no-need-for-memcg-locking-for-dirty-tracking.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/memcg-v1-no-need-for-memcg-locking-for-dirty-tracking.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Shakeel Butt <shakeel.butt@xxxxxxxxx>
Subject: memcg-v1: no need for memcg locking for dirty tracking
Date: Thu, 24 Oct 2024 18:23:00 -0700

During the era of memcg charge migration, the kernel had to make sure
that dirty stat updates did not race with charge migration; otherwise,
it could update the dirty stats of the wrong memcg.  Now that memcg
charge migration is gone, there is no more race for dirty stat updates,
and the previous locking can be removed.

Link: https://lkml.kernel.org/r/20241025012304.2473312-4-shakeel.butt@xxxxxxxxx
Signed-off-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Reviewed-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 fs/buffer.c         |    5 -----
 mm/page-writeback.c |   16 +++-------------
 2 files changed, 3 insertions(+), 18 deletions(-)

--- a/fs/buffer.c~memcg-v1-no-need-for-memcg-locking-for-dirty-tracking
+++ a/fs/buffer.c
@@ -736,15 +736,12 @@ bool block_dirty_folio(struct address_sp
 	/*
 	 * Lock out page's memcg migration to keep PageDirty
 	 * synchronized with per-memcg dirty page counters.
 	 */
-	folio_memcg_lock(folio);
 	newly_dirty = !folio_test_set_dirty(folio);
 	spin_unlock(&mapping->i_private_lock);
 
 	if (newly_dirty)
 		__folio_mark_dirty(folio, mapping, 1);
 
-	folio_memcg_unlock(folio);
-
 	if (newly_dirty)
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
@@ -1194,13 +1191,11 @@ void mark_buffer_dirty(struct buffer_hea
 	struct folio *folio = bh->b_folio;
 	struct address_space *mapping = NULL;
 
-	folio_memcg_lock(folio);
 	if (!folio_test_set_dirty(folio)) {
 		mapping = folio->mapping;
 		if (mapping)
 			__folio_mark_dirty(folio, mapping, 0);
 	}
-	folio_memcg_unlock(folio);
 	if (mapping)
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 }
--- a/mm/page-writeback.c~memcg-v1-no-need-for-memcg-locking-for-dirty-tracking
+++ a/mm/page-writeback.c
@@ -2743,8 +2743,6 @@ EXPORT_SYMBOL(noop_dirty_folio);
 /*
  * Helper function for set_page_dirty family.
  *
- * Caller must hold folio_memcg_lock().
- *
  * NOTE: This relies on being atomic wrt interrupts.
  */
 static void folio_account_dirtied(struct folio *folio,
@@ -2777,7 +2775,6 @@ static void folio_account_dirtied(struct
 /*
  * Helper function for deaccounting dirty page without writeback.
  *
- * Caller must hold folio_memcg_lock().
  */
 void folio_account_cleaned(struct folio *folio, struct bdi_writeback *wb)
 {
@@ -2795,9 +2792,8 @@ void folio_account_cleaned(struct folio
  * If warn is true, then emit a warning if the folio is not uptodate and has
  * not been truncated.
  *
- * The caller must hold folio_memcg_lock().  It is the caller's
- * responsibility to prevent the folio from being truncated while
- * this function is in progress, although it may have been truncated
+ * It is the caller's responsibility to prevent the folio from being truncated
+ * while this function is in progress, although it may have been truncated
  * before this function is called.  Most callers have the folio locked.
  * A few have the folio blocked from truncation through other means (e.g.
  * zap_vma_pages() has it mapped and is holding the page table lock).
@@ -2841,14 +2837,10 @@ void __folio_mark_dirty(struct folio *fo
  */
 bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio)
 {
-	folio_memcg_lock(folio);
-	if (folio_test_set_dirty(folio)) {
-		folio_memcg_unlock(folio);
+	if (folio_test_set_dirty(folio))
 		return false;
-	}
 
 	__folio_mark_dirty(folio, mapping, !folio_test_private(folio));
-	folio_memcg_unlock(folio);
 
 	if (mapping->host) {
 		/* !PageAnon && !swapper_space */
@@ -2975,14 +2967,12 @@ void __folio_cancel_dirty(struct folio *
 		struct bdi_writeback *wb;
 		struct wb_lock_cookie cookie = {};
 
-		folio_memcg_lock(folio);
 		wb = unlocked_inode_to_wb_begin(inode, &cookie);
 
 		if (folio_test_clear_dirty(folio))
 			folio_account_cleaned(folio, wb);
 
 		unlocked_inode_to_wb_end(inode, &cookie);
-		folio_memcg_unlock(folio);
 	} else {
 		folio_clear_dirty(folio);
 	}
_

Patches currently in -mm which might be from shakeel.butt@xxxxxxxxx are

mm-optimize-truncation-of-shadow-entries.patch
mm-optimize-invalidation-of-shadow-entries.patch
mm-truncate-reset-xa_has_values-flag-on-each-iteration.patch
memcg-add-tracing-for-memcg-stat-updates.patch
memcg-add-tracing-for-memcg-stat-updates-v2.patch
memcg-v1-fully-deprecate-move_charge_at_immigrate.patch
memcg-v1-remove-charge-move-code.patch
memcg-v1-no-need-for-memcg-locking-for-dirty-tracking.patch
memcg-v1-no-need-for-memcg-locking-for-writeback-tracking.patch
memcg-v1-no-need-for-memcg-locking-for-mglru.patch
memcg-v1-remove-memcg-move-locking-code.patch