[folded-merged] mm-use-folio_add_new_anon_rmap-if-folio_test_anonfolio==false-fix-3.patch removed from -mm tree

The quilt patch titled
     Subject: mm: folio_add_new_anon_rmap() careful __folio_set_swapbacked()
has been removed from the -mm tree.  Its filename was
     mm-use-folio_add_new_anon_rmap-if-folio_test_anonfolio==false-fix-3.patch

This patch was dropped because it was folded into mm-use-folio_add_new_anon_rmap-if-folio_test_anonfolio==false.patch

------------------------------------------------------
From: Hugh Dickins <hughd@xxxxxxxxxx>
Subject: mm: folio_add_new_anon_rmap() careful __folio_set_swapbacked()
Date: Mon, 24 Jun 2024 22:00:24 -0700 (PDT)

Commit "mm: use folio_add_new_anon_rmap() if folio_test_anon(folio)==
false" has extended folio_add_new_anon_rmap() to use on non-exclusive
folios, already visible to others in swap cache and on LRU.

That renders its non-atomic __folio_set_swapbacked() unsafe: it risks
overwriting concurrent atomic operations on folio->flags, losing bits
added or restoring bits cleared.  Since it's only used in this risky way
when folio_test_locked and !folio_test_anon, many such races are
excluded; but, for example, isolations by folio_test_clear_lru() are
vulnerable, as are the setting and clearing of the active flag.
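
(Illustration, not part of the patch: a minimal userspace C sketch of
that lost-update window, with C11 atomics standing in for the kernel's
folio->flags bit operations; the names, flag values, and the two-thread
setup are hypothetical.  Build with cc -pthread; a single run may or may
not hit the race.)

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	#define PG_SWAPBACKED	(1UL << 0)
	#define PG_LRU		(1UL << 1)

	/* Stand-in for folio->flags; the folio is already on the LRU. */
	static _Atomic unsigned long flags = PG_LRU;

	/* Like __folio_set_swapbacked(): a non-atomic read-modify-write. */
	static void *nonatomic_set_swapbacked(void *arg)
	{
		unsigned long old = atomic_load_explicit(&flags,
						memory_order_relaxed);
		/* <-- another CPU may clear PG_LRU right here... */
		atomic_store_explicit(&flags, old | PG_SWAPBACKED,
				      memory_order_relaxed);
		/* ...and this store then restores the stale PG_LRU. */
		return NULL;
	}

	/* Like folio_test_clear_lru(): an atomic bit clear. */
	static void *atomic_clear_lru(void *arg)
	{
		atomic_fetch_and_explicit(&flags, ~PG_LRU,
					  memory_order_relaxed);
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		pthread_create(&a, NULL, nonatomic_set_swapbacked, NULL);
		pthread_create(&b, NULL, atomic_clear_lru, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);

		/* An unlucky interleaving leaves PG_LRU set despite the
		 * atomic clear: the bit the other thread removed has been
		 * written back, which is exactly the overwrite described
		 * above. */
		printf("flags = %#lx\n", atomic_load(&flags));
		return 0;
	}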

It could just use the atomic folio_set_swapbacked(); but this function
does try to avoid atomics where it can, so use a branch instead: simply
avoid setting swapbacked when it is already set; that is good enough.
(Swapbacked is normally stable once set: lazyfree can undo it, but only
later, when the folio is found anon in a page table.)

This fixes a lot of instability under compaction and swapping loads:
assorted "Bad page"s, VM_BUG_ON_FOLIO()s, apparently even page double
frees - though I've not worked out what races could lead to the latter.

Link: https://lkml.kernel.org/r/f3599b1d-8323-0dc5-e9e0-fdb3cfc3dd5a@xxxxxxxxxx
Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Barry Song <21cnbao@xxxxxxxxx>
Cc: Barry Song <v-songbaohua@xxxxxxxx>
Cc: Chris Li <chrisl@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Shuai Yuan <yuanshuai@xxxxxxxx>
Cc: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Cc: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/rmap.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/mm/rmap.c~mm-use-folio_add_new_anon_rmap-if-folio_test_anonfolio==false-fix-3
+++ a/mm/rmap.c
@@ -1422,7 +1422,9 @@ void folio_add_new_anon_rmap(struct foli
 	VM_WARN_ON_FOLIO(!exclusive && !folio_test_locked(folio), folio);
 	VM_BUG_ON_VMA(address < vma->vm_start ||
 			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
-	__folio_set_swapbacked(folio);
+
+	if (!folio_test_swapbacked(folio))
+		__folio_set_swapbacked(folio);
 	__folio_set_anon(folio, vma, address, exclusive);
 
 	if (likely(!folio_test_large(folio))) {
_

Patches currently in -mm which might be from hughd@xxxxxxxxxx are

mm-use-folio_add_new_anon_rmap-if-folio_test_anonfolio==false.patch




