The patch titled
     Subject: mm/khugepaged: convert hpage_collapse_scan_pmd() to use folios
has been added to the -mm mm-unstable branch.  Its filename is
     mm-khugepaged-convert-hpage_collapse_scan_pmd-to-use-folios.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-khugepaged-convert-hpage_collapse_scan_pmd-to-use-folios.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: "Vishal Moola (Oracle)" <vishal.moola@xxxxxxxxx>
Subject: mm/khugepaged: convert hpage_collapse_scan_pmd() to use folios
Date: Mon, 16 Oct 2023 13:05:07 -0700

Replaces 5 calls to compound_head(), and removes 1466 bytes of kernel
text.

Previously, to determine if any pte was shared, the page mapcount
corresponding exactly to the pte was checked.  This gave us a precise
number of shared ptes.  Using folio_estimated_sharers() instead uses the
mapcount of the head page, giving us an estimate for tail page ptes.

This means if a tail page's mapcount is greater than its head page's
mapcount, folio_estimated_sharers() would be underestimating the number
of shared ptes, and vice versa.

Link: https://lkml.kernel.org/r/20231016200510.7387-3-vishal.moola@xxxxxxxxx
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/khugepaged.c |   26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

--- a/mm/khugepaged.c~mm-khugepaged-convert-hpage_collapse_scan_pmd-to-use-folios
+++ a/mm/khugepaged.c
@@ -1245,7 +1245,7 @@ static int hpage_collapse_scan_pmd(struc
 	pte_t *pte, *_pte;
 	int result = SCAN_FAIL, referenced = 0;
 	int none_or_zero = 0, shared = 0;
-	struct page *page = NULL;
+	struct folio *folio = NULL;
 	unsigned long _address;
 	spinlock_t *ptl;
 	int node = NUMA_NO_NODE, unmapped = 0;
@@ -1316,13 +1316,13 @@ static int hpage_collapse_scan_pmd(struc
 		if (pte_write(pteval))
 			writable = true;
 
-		page = vm_normal_page(vma, _address, pteval);
-		if (unlikely(!page) || unlikely(is_zone_device_page(page))) {
+		folio = vm_normal_folio(vma, _address, pteval);
+		if (unlikely(!folio) || unlikely(folio_is_zone_device(folio))) {
 			result = SCAN_PAGE_NULL;
 			goto out_unmap;
 		}
 
-		if (page_mapcount(page) > 1) {
+		if (folio_estimated_sharers(folio) > 1) {
 			++shared;
 			if (cc->is_khugepaged &&
 			    shared > khugepaged_max_ptes_shared) {
@@ -1332,29 +1332,27 @@ static int hpage_collapse_scan_pmd(struc
 			}
 		}
 
-		page = compound_head(page);
-
 		/*
 		 * Record which node the original page is from and save this
 		 * information to cc->node_load[].
 		 * Khugepaged will allocate hugepage from the node has the max
 		 * hit record.
 		 */
-		node = page_to_nid(page);
+		node = folio_nid(folio);
 		if (hpage_collapse_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			goto out_unmap;
 		}
 		cc->node_load[node]++;
-		if (!PageLRU(page)) {
+		if (!folio_test_lru(folio)) {
 			result = SCAN_PAGE_LRU;
 			goto out_unmap;
 		}
-		if (PageLocked(page)) {
+		if (folio_test_locked(folio)) {
 			result = SCAN_PAGE_LOCK;
 			goto out_unmap;
 		}
-		if (!PageAnon(page)) {
+		if (!folio_test_anon(folio)) {
 			result = SCAN_PAGE_ANON;
 			goto out_unmap;
 		}
@@ -1369,7 +1367,7 @@ static int hpage_collapse_scan_pmd(struc
 		 * has excessive GUP pins (i.e. 512).  Anyway the same check
 		 * will be done again later the risk seems low.
 		 */
-		if (!is_refcount_suitable(page)) {
+		if (!is_refcount_suitable(&folio->page)) {
 			result = SCAN_PAGE_COUNT;
 			goto out_unmap;
 		}
@@ -1379,8 +1377,8 @@ static int hpage_collapse_scan_pmd(struc
 		 * enough young pte to justify collapsing the page
 		 */
 		if (cc->is_khugepaged &&
-		    (pte_young(pteval) || page_is_young(page) ||
-		     PageReferenced(page) || mmu_notifier_test_young(vma->vm_mm,
+		    (pte_young(pteval) || folio_test_young(folio) ||
+		     folio_test_referenced(folio) || mmu_notifier_test_young(vma->vm_mm,
 								     address)))
 			referenced++;
 	}
@@ -1402,7 +1400,7 @@ out_unmap:
 		*mmap_locked = false;
 	}
 out:
-	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
+	trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
 				     none_or_zero, result, unmapped);
 	return result;
 }
_

Patches currently in -mm which might be from vishal.moola@xxxxxxxxx are

mm-khugepaged-convert-__collapse_huge_page_isolate-to-use-folios.patch
mm-khugepaged-convert-hpage_collapse_scan_pmd-to-use-folios.patch
mm-khugepaged-convert-is_refcount_suitable-to-use-folios.patch
mm-khugepaged-convert-alloc_charge_hpage-to-use-folios.patch
mm-khugepaged-convert-collapse_pte_mapped_thp-to-use-folios.patch
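
For readers unfamiliar with the helper the changelog leans on: the "estimate"
comes from folio_estimated_sharers() consulting only the head page's mapcount,
not the mapcount of the specific pte being scanned.  A minimal sketch, assuming
the include/linux/mm.h definition current around this series (the helper itself
is not part of this patch):

static inline int folio_estimated_sharers(struct folio *folio)
{
	/* Only the head (first) page's mapcount is read; tail pages are ignored. */
	return page_mapcount(folio_page(folio, 0));
}

So the old check, page_mapcount(page) > 1, counted mappings of exactly the page
the pte refers to, while the new check counts mappings of the folio's head page
and treats that as representative of every pte in the folio.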