Commit "mm: use folio_add_new_anon_rmap() if folio_test_anon(folio)== false" has extended folio_add_new_anon_rmap() to use on non-exclusive folios, already visible to others in swap cache and on LRU. That renders its non-atomic __folio_set_swapbacked() unsafe: it risks overwriting concurrent atomic operations on folio->flags, losing bits added or restoring bits cleared. Since it's only used in this risky way when folio_test_locked and !folio_test_anon, many such races are excluded; but, for example, isolations by folio_test_clear_lru() are vulnerable, and setting or clearing active. It could just use the atomic folio_set_swapbacked(); but this function does try to avoid atomics where it can, so use a branch instead: just avoid setting swapbacked when it is already set, that is good enough. (Swapbacked is normally stable once set: lazyfree can undo it, but only later, when found anon in a page table.) This fixes a lot of instability under compaction and swapping loads: assorted "Bad page"s, VM_BUG_ON_FOLIO()s, apparently even page double frees - though I've not worked out what races could lead to the latter. Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx> --- mm/rmap.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/mm/rmap.c b/mm/rmap.c index df1a43295c85..5394c1178bf1 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1408,7 +1408,9 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma, VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio); VM_BUG_ON_VMA(address < vma->vm_start || address + (nr << PAGE_SHIFT) > vma->vm_end, vma); - __folio_set_swapbacked(folio); + + if (!folio_test_swapbacked(folio)) + __folio_set_swapbacked(folio); __folio_set_anon(folio, vma, address, exclusive); if (likely(!folio_test_large(folio))) { -- 2.35.3