The patch titled
     Subject: ksm: use a folio in try_to_merge_one_page()
has been added to the -mm mm-unstable branch.  Its filename is
     ksm-use-a-folio-in-try_to_merge_one_page.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/ksm-use-a-folio-in-try_to_merge_one_page.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: ksm: use a folio in try_to_merge_one_page()
Date: Wed, 2 Oct 2024 16:25:27 +0100

Patch series "Remove PageKsm()".

The KSM flag is almost always tested on the folio rather than on the
page.  This series removes the final users of PageKsm() and makes the
flag only testable on the folio.


This patch (of 5):

It is safe to use a folio here because all callers took a refcount on
this page.  The one wrinkle is that we have to recalculate the value of
folio after splitting the page, since it has probably changed.
Replaces nine calls to compound_head() with one.

Link: https://lkml.kernel.org/r/20241002152533.1350629-1-willy@xxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/20241002152533.1350629-2-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Alex Shi <alexs@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/ksm.c |   33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

--- a/mm/ksm.c~ksm-use-a-folio-in-try_to_merge_one_page
+++ a/mm/ksm.c
@@ -1442,28 +1442,29 @@ out:
 static int try_to_merge_one_page(struct vm_area_struct *vma,
 				 struct page *page, struct page *kpage)
 {
+	struct folio *folio = page_folio(page);
 	pte_t orig_pte = __pte(0);
 	int err = -EFAULT;
 
 	if (page == kpage)			/* ksm page forked */
 		return 0;
 
-	if (!PageAnon(page))
+	if (!folio_test_anon(folio))
 		goto out;
 
 	/*
 	 * We need the folio lock to read a stable swapcache flag in
-	 * write_protect_page().  We use trylock_page() instead of
-	 * lock_page() because we don't want to wait here - we
-	 * prefer to continue scanning and merging different pages,
-	 * then come back to this page when it is unlocked.
+	 * write_protect_page().  We trylock because we don't want to wait
+	 * here - we prefer to continue scanning and merging different
+	 * pages, then come back to this page when it is unlocked.
 	 */
-	if (!trylock_page(page))
+	if (!folio_trylock(folio))
 		goto out;
 
-	if (PageTransCompound(page)) {
+	if (folio_test_large(folio)) {
 		if (split_huge_page(page))
 			goto out_unlock;
+		folio = page_folio(page);
 	}
 
 	/*
@@ -1472,28 +1473,28 @@ static int try_to_merge_one_page(struct
 	 * ptes are necessarily already write-protected.  But in either
 	 * case, we need to lock and check page_count is not raised.
 	 */
-	if (write_protect_page(vma, page_folio(page), &orig_pte) == 0) {
+	if (write_protect_page(vma, folio, &orig_pte) == 0) {
 		if (!kpage) {
 			/*
-			 * While we hold page lock, upgrade page from
-			 * PageAnon+anon_vma to PageKsm+NULL stable_node:
+			 * While we hold folio lock, upgrade folio from
+			 * anon to a NULL stable_node with the KSM flag set:
 			 * stable_tree_insert() will update stable_node.
 			 */
-			folio_set_stable_node(page_folio(page), NULL);
-			mark_page_accessed(page);
+			folio_set_stable_node(folio, NULL);
+			folio_mark_accessed(folio);
 			/*
-			 * Page reclaim just frees a clean page with no dirty
+			 * Page reclaim just frees a clean folio with no dirty
 			 * ptes: make sure that the ksm page would be swapped.
 			 */
-			if (!PageDirty(page))
-				SetPageDirty(page);
+			if (!folio_test_dirty(folio))
+				folio_mark_dirty(folio);
 			err = 0;
 		} else if (pages_identical(page, kpage))
 			err = replace_page(vma, page, kpage, orig_pte);
 	}
 
 out_unlock:
-	unlock_page(page);
+	folio_unlock(folio);
 out:
 	return err;
 }
_

Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are

ksm-use-a-folio-in-try_to_merge_one_page.patch
ksm-convert-cmp_and_merge_page-to-use-a-folio.patch
ksm-convert-should_skip_rmap_item-to-take-a-folio.patch
mm-add-pageanonnotksm.patch
mm-remove-pageksm.patch
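
As an aside for readers new to the folio API: the sketch below is not
part of the patch (the helper name example_lock_order0_folio() and the
-EBUSY return values are invented for illustration); it only demonstrates
the "recompute the folio after a split" pattern that the changelog above
describes.  A successful split_huge_page() may leave @page in a different,
now order-0, folio, so the locally cached folio pointer is stale and must
be refreshed with page_folio() before it is used again.

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/huge_mm.h>

/*
 * Illustrative sketch only -- not taken from mm/ksm.c.  Try to lock the
 * folio containing @page without sleeping, split it if it is large, and
 * hand back the (possibly new) order-0 folio, still locked.
 */
static int example_lock_order0_folio(struct page *page, struct folio **foliop)
{
	struct folio *folio = page_folio(page);

	/* Don't wait for the lock; the caller can come back later. */
	if (!folio_trylock(folio))
		return -EBUSY;

	if (folio_test_large(folio)) {
		if (split_huge_page(page)) {
			/* Split failed; the original folio is still locked. */
			folio_unlock(folio);
			return -EBUSY;
		}
		/*
		 * The split may have moved @page into a different, order-0
		 * folio; the old pointer must not be used any more.
		 */
		folio = page_folio(page);
	}

	*foliop = folio;	/* returned locked and small */
	return 0;
}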