The patch titled
     Subject: mm: folio_add_new_anon_rmap() careful __folio_set_swapbacked()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-use-folio_add_new_anon_rmap-if-folio_test_anonfolio==false-fix-3.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-use-folio_add_new_anon_rmap-if-folio_test_anonfolio==false-fix-3.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Hugh Dickins <hughd@xxxxxxxxxx>
Subject: mm: folio_add_new_anon_rmap() careful __folio_set_swapbacked()
Date: Mon, 24 Jun 2024 22:00:24 -0700 (PDT)

Commit "mm: use folio_add_new_anon_rmap() if folio_test_anon(folio)==
false" has extended folio_add_new_anon_rmap() for use on non-exclusive
folios, which are already visible to others in the swap cache and on the
LRU.

That renders its non-atomic __folio_set_swapbacked() unsafe: it risks
overwriting concurrent atomic operations on folio->flags, losing bits
added or restoring bits cleared.  Since it is only used in this risky way
when folio_test_locked and !folio_test_anon, many such races are
excluded; but, for example, isolations by folio_test_clear_lru() are
vulnerable, as is setting or clearing active.

It could just use the atomic folio_set_swapbacked(); but this function
does try to avoid atomics where it can, so use a branch instead: simply
avoid setting swapbacked when it is already set; that is good enough.
(Swapbacked is normally stable once set: lazyfree can undo it, but only
later, when the folio is found anon in a page table.)

This fixes a lot of instability under compaction and swapping loads:
assorted "Bad page"s, VM_BUG_ON_FOLIO()s, and apparently even page double
frees - though I've not worked out what races could lead to the latter.
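
[Not part of the patch: the race being avoided is the classic lost update
on a shared flags word.  Below is a minimal userspace sketch of it; the
flag values, C11 atomics and all names are illustrative stand-ins for the
kernel's folio->flags bitops, and the interleaving that would normally
need two racing CPUs is replayed deterministically from one thread.]

	#include <stdatomic.h>
	#include <stdio.h>

	#define PG_swapbacked	(1UL << 0)	/* illustrative bit masks */
	#define PG_lru		(1UL << 1)

	static _Atomic unsigned long flags = PG_lru;	/* folio on the LRU */

	int main(void)
	{
		/*
		 * "CPU A" starts a non-atomic set, as
		 * __folio_set_swapbacked() would: a plain
		 * read-modify-write of the flags word.
		 */
		unsigned long old = atomic_load(&flags);

		/*
		 * Meanwhile "CPU B" isolates the folio, as
		 * folio_test_clear_lru() would: it atomically
		 * clears PG_lru and takes ownership.
		 */
		atomic_fetch_and(&flags, ~PG_lru);

		/*
		 * CPU A's write-back lands: PG_lru is resurrected,
		 * so the folio appears to be on the LRU even though
		 * the isolator now owns it.
		 */
		atomic_store(&flags, old | PG_swapbacked);

		printf("PG_lru %s\n", (atomic_load(&flags) & PG_lru) ?
		       "restored: update lost" : "still clear");
		return 0;
	}

The branch in the patch sidesteps this: a folio already in the swap cache
already has swapbacked set, so the racy read-modify-write is skipped
entirely; when swapbacked is not yet set, the folio is freshly allocated
and not yet visible to others, so the non-atomic write is safe.
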
Link: https://lkml.kernel.org/r/f3599b1d-8323-0dc5-e9e0-fdb3cfc3dd5a@xxxxxxxxxx
Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Barry Song <21cnbao@xxxxxxxxx>
Cc: Barry Song <v-songbaohua@xxxxxxxx>
Cc: Chris Li <chrisl@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Shuai Yuan <yuanshuai@xxxxxxxx>
Cc: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Cc: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/rmap.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/mm/rmap.c~mm-use-folio_add_new_anon_rmap-if-folio_test_anonfolio==false-fix-3
+++ a/mm/rmap.c
@@ -1422,7 +1422,9 @@ void folio_add_new_anon_rmap(struct foli
 	VM_WARN_ON_FOLIO(!exclusive && !folio_test_locked(folio), folio);
 	VM_BUG_ON_VMA(address < vma->vm_start ||
 			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
-	__folio_set_swapbacked(folio);
+
+	if (!folio_test_swapbacked(folio))
+		__folio_set_swapbacked(folio);
 	__folio_set_anon(folio, vma, address, exclusive);
 
 	if (likely(!folio_test_large(folio))) {
_

Patches currently in -mm which might be from hughd@xxxxxxxxxx are

mm-migrate-folio_ref_freeze-under-xas_lock_irq.patch
mm-use-folio_add_new_anon_rmap-if-folio_test_anonfolio==false-fix-3.patch