The patch titled
     Subject: mm: migrate high-order folios in swap cache correctly
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-migrate-high-order-folios-in-swap-cache-correctly.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-migrate-high-order-folios-in-swap-cache-correctly.patch

This patch will later appear in the mm-hotfixes-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated
there every 2-3 working days

------------------------------------------------------
From: Charan Teja Kalla <quic_charante@xxxxxxxxxxx>
Subject: mm: migrate high-order folios in swap cache correctly
Date: Thu, 14 Dec 2023 04:58:41 +0000

Large folios occupy N consecutive entries in the swap cache instead of
using multi-index entries like the page cache.  However, if a large folio
is re-added to the LRU list, it can be migrated.  The migration code was
not aware of the difference between the swap cache and the page cache and
assumed that a single xas_store() would be sufficient.

This leaves potentially many stale pointers to the now-migrated folio in
the swap cache, which can lead to almost arbitrary data corruption in the
future.  This can also manifest as infinite loops with the RCU read lock
held.
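[Editor's note: the following is a minimal illustrative sketch, not part
of the patch or its changelog.  It restates, for the swap-cache case only,
the replacement loop added by the diff below; the example value for nr is
an assumption.]

	/*
	 * Sketch only: the swap cache stores folio_nr_pages() separate
	 * entries for a large folio rather than one multi-index entry,
	 * so every slot the old folio occupies must be rewritten.  A
	 * single xas_store() (the old behaviour) left nr - 1 stale
	 * pointers to the now-migrated folio behind.
	 */
	long nr = folio_nr_pages(folio);	/* e.g. 512 for a PMD-sized folio */
	long i;

	for (i = 0; i < nr; i++) {
		xas_store(&xas, newfolio);	/* overwrite this swap cache slot */
		xas_next(&xas);			/* advance to the next index */
	}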
[willy@xxxxxxxxxxxxx: modifications to the changelog & tweaked the fix]
Fixes: 3417013e0d183be ("mm/migrate: Add folio_migrate_mapping()")
Link: https://lkml.kernel.org/r/20231214045841.961776-1-willy@xxxxxxxxxxxxx
Signed-off-by: Charan Teja Kalla <quic_charante@xxxxxxxxxxx>
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Reported-by: Charan Teja Kalla <quic_charante@xxxxxxxxxxx>
Closes: https://lkml.kernel.org/r/1700569840-17327-1-git-send-email-quic_charante@xxxxxxxxxxx
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/migrate.c |    9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

--- a/mm/migrate.c~mm-migrate-high-order-folios-in-swap-cache-correctly
+++ a/mm/migrate.c
@@ -405,6 +405,7 @@ int folio_migrate_mapping(struct address
 	int dirty;
 	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
 	long nr = folio_nr_pages(folio);
+	long entries, i;
 
 	if (!mapping) {
 		/* Anonymous page without mapping */
@@ -442,8 +443,10 @@ int folio_migrate_mapping(struct address
 			folio_set_swapcache(newfolio);
 			newfolio->private = folio_get_private(folio);
 		}
+		entries = nr;
 	} else {
 		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
+		entries = 1;
 	}
 
 	/* Move dirty while page refs frozen and newpage not yet exposed */
@@ -453,7 +456,11 @@ int folio_migrate_mapping(struct address
 		folio_set_dirty(newfolio);
 	}
 
-	xas_store(&xas, newfolio);
+	/* Swap cache still stores N entries instead of a high-order entry */
+	for (i = 0; i < entries; i++) {
+		xas_store(&xas, newfolio);
+		xas_next(&xas);
+	}
 
 	/*
 	 * Drop cache reference from old page by unfreezing
_

Patches currently in -mm which might be from quic_charante@xxxxxxxxxxx are

mm-sparsemem-fix-race-in-accessing-memory_section-usage.patch
mm-sparsemem-fix-race-in-accessing-memory_section-usage-v2.patch
mm-migrate-high-order-folios-in-swap-cache-correctly.patch