The patch titled
     Subject: mm/huge_memory: work on folio->swap instead of page->private when splitting folio
has been added to the -mm mm-unstable branch.  Its filename is
     mm-huge_memory-work-on-folio-swap-instead-of-page-private-when-splitting-folio.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-huge_memory-work-on-folio-swap-instead-of-page-private-when-splitting-folio.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: David Hildenbrand <david@xxxxxxxxxx>
Subject: mm/huge_memory: work on folio->swap instead of page->private when splitting folio
Date: Mon, 21 Aug 2023 18:08:49 +0200

Let's work on folio->swap instead.  While at it, use folio_test_anon() and
folio_test_swapcache() -- the original folio remains valid even after
splitting (but is then an order-0 folio).

We can probably convert a lot more to folios in that code, let's focus on
folio->swap handling only for now.

Link: https://lkml.kernel.org/r/20230821160849.531668-5-david@xxxxxxxxxx
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Dan Streetman <ddstreet@xxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Seth Jennings <sjenning@xxxxxxxxxx>
Cc: Vitaly Wool <vitaly.wool@xxxxxxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/huge_memory.c |   29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

--- a/mm/huge_memory.c~mm-huge_memory-work-on-folio-swap-instead-of-page-private-when-splitting-folio
+++ a/mm/huge_memory.c
@@ -2401,10 +2401,16 @@ static void lru_add_page_tail(struct pag
 	}
 }
 
-static void __split_huge_page_tail(struct page *head, int tail,
+static void __split_huge_page_tail(struct folio *folio, int tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
+	struct page *head = &folio->page;
 	struct page *page_tail = head + tail;
+	/*
+	 * Careful: new_folio is not a "real" folio before we cleared PageTail.
+	 * Don't pass it around before clear_compound_head().
+	 */
+	struct folio *new_folio = (struct folio *)page_tail;
 
 	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
 
@@ -2453,8 +2459,8 @@ static void __split_huge_page_tail(struc
 		VM_WARN_ON_ONCE_PAGE(true, page_tail);
 		page_tail->private = 0;
 	}
-	if (PageSwapCache(head))
-		set_page_private(page_tail, (unsigned long)head->private + tail);
+	if (folio_test_swapcache(folio))
+		new_folio->swap.val = folio->swap.val + tail;
 
 	/* Page flags must be visible before we make the page non-compound. */
 	smp_wmb();
@@ -2500,11 +2506,9 @@ static void __split_huge_page(struct pag
 	/* complete memcg works before add pages to LRU */
 	split_page_memcg(head, nr);
 
-	if (PageAnon(head) && PageSwapCache(head)) {
-		swp_entry_t entry = { .val = page_private(head) };
-
-		offset = swp_offset(entry);
-		swap_cache = swap_address_space(entry);
+	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
+		offset = swp_offset(folio->swap);
+		swap_cache = swap_address_space(folio->swap);
 		xa_lock(&swap_cache->i_pages);
 	}
 
@@ -2514,7 +2518,7 @@ static void __split_huge_page(struct pag
 	ClearPageHasHWPoisoned(head);
 
 	for (i = nr - 1; i >= 1; i--) {
-		__split_huge_page_tail(head, i, lruvec, list);
+		__split_huge_page_tail(folio, i, lruvec, list);
 		/* Some pages can be beyond EOF: drop them from page cache */
 		if (head[i].index >= end) {
 			struct folio *tail = page_folio(head + i);
@@ -2559,11 +2563,8 @@ static void __split_huge_page(struct pag
 
 	remap_page(folio, nr);
 
-	if (PageSwapCache(head)) {
-		swp_entry_t entry = { .val = page_private(head) };
-
-		split_swap_cluster(entry);
-	}
+	if (folio_test_swapcache(folio))
+		split_swap_cluster(folio->swap);
 
 	for (i = 0; i < nr; i++) {
 		struct page *subpage = head + i;
_

Patches currently in -mm which might be from david@xxxxxxxxxx are

mm-gup-reintroduce-foll_numa-as-foll_honor_numa_fault.patch
smaps-use-vm_normal_page_pmd-instead-of-follow_trans_huge_pmd.patch
mm-gup-handle-cont-pte-hugetlb-pages-correctly-in-gup_must_unshare-via-gup-fast.patch
kvm-explicitly-set-foll_honor_numa_fault-in-hva_to_pfn_slow.patch
mm-gup-dont-implicitly-set-foll_honor_numa_fault.patch
pgtable-improve-pte_protnone-comment.patch
selftest-mm-ksm_functional_tests-test-in-mmap_and_merge_range-if-anything-got-merged.patch
selftest-mm-ksm_functional_tests-add-prot_none-test.patch
selftest-mm-ksm_functional_tests-add-prot_none-test-fix.patch
mm-swap-stop-using-page-private-on-tail-pages-for-thp_swap.patch
mm-swap-inline-folio_set_swap_entry-and-folio_swap_entry.patch
mm-huge_memory-work-on-folio-swap-instead-of-page-private-when-splitting-folio.patch
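
As context for readers who don't have the splitting code in front of them, here is a minimal sketch, in plain userspace C rather than kernel code, of the bookkeeping the patch moves from page->private over to folio->swap: each tail produced by the split inherits the head folio's swap entry, offset by its tail index.  The struct layout and helper below are simplified stand-ins invented for illustration only; they are not the real kernel definitions.

	#include <stdio.h>

	/* Simplified stand-ins for the kernel types involved; not the real layouts. */
	typedef struct { unsigned long val; } swp_entry_t;

	struct folio {
		swp_entry_t swap;	/* swap entry of the folio */
	};

	/*
	 * Model of the new behaviour: every tail folio created by the split
	 * gets the head folio's swap entry plus its tail index, read and
	 * written via folio->swap instead of reinterpreting page->private.
	 */
	static void split_swap_entries(const struct folio *head,
				       struct folio *tails, int nr)
	{
		for (int tail = 1; tail < nr; tail++)
			tails[tail].swap.val = head->swap.val + tail;
	}

	int main(void)
	{
		struct folio head = { .swap = { .val = 0x1000 } };
		struct folio tails[4] = { head };

		split_swap_entries(&head, tails, 4);
		for (int i = 1; i < 4; i++)
			printf("tail %d -> swap entry 0x%lx\n", i, tails[i].swap.val);
		return 0;
	}

The real code additionally has to be careful not to treat the tail page as a folio before clear_compound_head() has run, which is what the new comment in __split_huge_page_tail() spells out.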