On Wed, Mar 06, 2024 at 10:50:52AM -0500, Zi Yan wrote:
> From: Zi Yan <ziy@xxxxxxxxxx>
>
> The tail pages in a THP can have swap entry information stored in their
> private field. When migrating to a new page, all tail pages of the new
> page need to update ->private to avoid future data corruption.
>
> This fix is stable-only, since after commit 07e09c483cbe ("mm/huge_memory:
> work on folio->swap instead of page->private when splitting folio"),
> subpages of a swapcached THP no longer requires the maintenance.
>
> Adding THPs to the swapcache was introduced in commit
> 38d8b4e6bdc87 ("mm, THP, swap: delay splitting THP during swap out"),
> where each subpage of a THP added to the swapcache had its own swapcache
> entry and required the ->private field to point to the correct swapcache
> entry. Later, when THP migration functionality was implemented in commit
> 616b8371539a6 ("mm: thp: enable thp migration in generic path"),
> it initially did not handle the subpages of swapcached THPs, failing to
> update their ->private fields or replace the subpage pointers in the
> swapcache. Subsequently, commit e71769ae5260 ("mm: enable thp migration
> for shmem thp") addressed the swapcache update aspect. This patch fixes
> the update of subpage ->private fields.
>
> Closes: https://lore.kernel.org/linux-mm/1707814102-22682-1-git-send-email-quic_charante@xxxxxxxxxxx/
> Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")
> Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
> Acked-by: David Hildenbrand <david@xxxxxxxxxx>
> ---
>  mm/migrate.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 171573613c39..893ea04498f7 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -514,8 +514,12 @@ int migrate_page_move_mapping(struct address_space *mapping,
>  	if (PageSwapBacked(page)) {
>  		__SetPageSwapBacked(newpage);
>  		if (PageSwapCache(page)) {
> +			int i;
> +
>  			SetPageSwapCache(newpage);
> -			set_page_private(newpage, page_private(page));
> +			for (i = 0; i < (1 << compound_order(page)); i++)
> +				set_page_private(newpage + i,
> +						 page_private(page + i));
>  		}
>  	} else {
>  		VM_BUG_ON_PAGE(PageSwapCache(page), page);
> --
> 2.43.0
>
>

All now queued up, thanks.

greg k-h
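
The invariant the backport restores can be sketched with a small userspace model
(illustrative only; fake_page, NR_SUBPAGES, head_entry and the two
migrate_private_* helpers are hypothetical stand-ins, not kernel code). On these
older kernels each subpage of a swapcached THP carries its own swap entry in
->private (modeled here as consecutive values), so copying only the head page's
value leaves every new tail page pointing at the wrong entry:

	#include <stdio.h>

	#define NR_SUBPAGES 8	/* stand-in for 1 << compound_order(page) */

	struct fake_page { unsigned long private; };

	/* Old behaviour: only the head page's ->private is carried over. */
	static void migrate_private_buggy(struct fake_page *newp,
					  const struct fake_page *oldp)
	{
		newp[0].private = oldp[0].private;
	}

	/* Patched behaviour: every subpage's ->private is carried over. */
	static void migrate_private_fixed(struct fake_page *newp,
					  const struct fake_page *oldp)
	{
		for (int i = 0; i < NR_SUBPAGES; i++)
			newp[i].private = oldp[i].private;
	}

	int main(void)
	{
		struct fake_page oldp[NR_SUBPAGES];
		struct fake_page buggy[NR_SUBPAGES] = { 0 };
		struct fake_page fixed[NR_SUBPAGES] = { 0 };
		unsigned long head_entry = 0x1000;	/* hypothetical swap entry value */

		/* Swapcache setup: each subpage gets its own swap entry. */
		for (int i = 0; i < NR_SUBPAGES; i++)
			oldp[i].private = head_entry + i;

		migrate_private_buggy(buggy, oldp);
		migrate_private_fixed(fixed, oldp);

		for (int i = 0; i < NR_SUBPAGES; i++)
			printf("subpage %d: expected %#lx  buggy %#lx  fixed %#lx\n",
			       i, oldp[i].private, buggy[i].private, fixed[i].private);
		return 0;
	}

Running the model shows only the fixed variant reproduces every subpage's value,
which mirrors why the stable patch loops over all 1 << compound_order(page)
subpages instead of copying just the head page's ->private.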