The patch titled
     Subject: mm/khugepaged: convert is_refcount_suitable() to use folios
has been added to the -mm mm-unstable branch.  Its filename is
     mm-khugepaged-convert-is_refcount_suitable-to-use-folios.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-khugepaged-convert-is_refcount_suitable-to-use-folios.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: "Vishal Moola (Oracle)" <vishal.moola@xxxxxxxxx>
Subject: mm/khugepaged: convert is_refcount_suitable() to use folios
Date: Fri, 20 Oct 2023 11:33:29 -0700

Both callers of is_refcount_suitable() have been converted to use folios,
so convert it to take in a folio.  Both callers only operate on head pages
of folios, so the mapcount/refcount conversions here are trivial.

Removes 3 calls to compound_head(), and removes 315 bytes of kernel text.

Link: https://lkml.kernel.org/r/20231020183331.10770-4-vishal.moola@xxxxxxxxx
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Cc: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/khugepaged.c |   14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

--- a/mm/khugepaged.c~mm-khugepaged-convert-is_refcount_suitable-to-use-folios
+++ a/mm/khugepaged.c
@@ -524,15 +524,15 @@ static void release_pte_pages(pte_t *pte
 	}
 }
 
-static bool is_refcount_suitable(struct page *page)
+static bool is_refcount_suitable(struct folio *folio)
 {
 	int expected_refcount;
 
-	expected_refcount = total_mapcount(page);
-	if (PageSwapCache(page))
-		expected_refcount += compound_nr(page);
+	expected_refcount = folio_mapcount(folio);
+	if (folio_test_swapcache(folio))
+		expected_refcount += folio_nr_pages(folio);
 
-	return page_count(page) == expected_refcount;
+	return folio_ref_count(folio) == expected_refcount;
 }
 
 static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
@@ -625,7 +625,7 @@ static int __collapse_huge_page_isolate(
 	 * but not from this process. The other process cannot write to
 	 * the page, only trigger CoW.
 	 */
-	if (!is_refcount_suitable(&folio->page)) {
+	if (!is_refcount_suitable(folio)) {
 		folio_unlock(folio);
 		result = SCAN_PAGE_COUNT;
 		goto out;
@@ -1371,7 +1371,7 @@ static int hpage_collapse_scan_pmd(struc
 	 * has excessive GUP pins (i.e. 512).  Anyway the same check
 	 * will be done again later the risk seems low.
 	 */
-	if (!is_refcount_suitable(&folio->page)) {
+	if (!is_refcount_suitable(folio)) {
 		result = SCAN_PAGE_COUNT;
 		goto out_unmap;
 	}
_

Patches currently in -mm which might be from vishal.moola@xxxxxxxxx are

mm-khugepaged-convert-__collapse_huge_page_isolate-to-use-folios.patch
mm-khugepaged-convert-hpage_collapse_scan_pmd-to-use-folios.patch
mm-khugepaged-convert-is_refcount_suitable-to-use-folios.patch
mm-khugepaged-convert-alloc_charge_hpage-to-use-folios.patch
mm-khugepaged-convert-collapse_pte_mapped_thp-to-use-folios.patch
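For reference, a minimal sketch (not taken from this patch) of how a
converted caller is expected to reach the refcount check: it assumes the
caller starts from a PTE-mapped struct page and uses the existing
vm_normal_page() and page_folio() helpers; variable names are illustrative
only.

	/*
	 * Illustrative sketch, not part of this patch: derive the folio
	 * from the struct page returned by vm_normal_page(), then pass
	 * the folio (not &folio->page) to the converted helper.
	 */
	struct page *page = vm_normal_page(vma, address, pteval);
	struct folio *folio;

	if (!page) {
		result = SCAN_PAGE_NULL;
		goto out;
	}
	folio = page_folio(page);

	if (!is_refcount_suitable(folio)) {
		result = SCAN_PAGE_COUNT;
		goto out;
	}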