The quilt patch titled
     Subject: hugetlb: convert hugetlb_vma_maps_page() to hugetlb_vma_maps_pfn()
has been removed from the -mm tree.  Its filename was
     hugetlb-convert-hugetlb_vma_maps_page-to-hugetlb_vma_maps_pfn.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: hugetlb: convert hugetlb_vma_maps_page() to hugetlb_vma_maps_pfn()
Date: Wed, 26 Feb 2025 16:31:29 +0000

pte_page() is more expensive than pte_pfn() (often it's defined as
pfn_to_page(pte_pfn())), so it makes sense to do the conversion to pfn
once (by calling folio_pfn()) rather than convert the pfn to a page each
time.

While this is a very small advantage, the main motivation is removing a
reference to folio->page.

Link: https://lkml.kernel.org/r/20250226163131.3795869-1-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 fs/hugetlbfs/inode.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

--- a/fs/hugetlbfs/inode.c~hugetlb-convert-hugetlb_vma_maps_page-to-hugetlb_vma_maps_pfn
+++ a/fs/hugetlbfs/inode.c
@@ -338,8 +338,8 @@ static void hugetlb_delete_from_page_cac
  * mutex for the page in the mapping.  So, we can not race with page being
  * faulted into the vma.
  */
-static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
-				unsigned long addr, struct page *page)
+static bool hugetlb_vma_maps_pfn(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long pfn)
 {
 	pte_t *ptep, pte;

@@ -351,7 +351,7 @@ static bool hugetlb_vma_maps_page(struct
 	if (huge_pte_none(pte) || !pte_present(pte))
 		return false;

-	if (pte_page(pte) == page)
+	if (pte_pfn(pte) == pfn)
 		return true;

 	return false;
@@ -396,7 +396,7 @@ static void hugetlb_unmap_file_folio(str
 {
 	struct rb_root_cached *root = &mapping->i_mmap;
 	struct hugetlb_vma_lock *vma_lock;
-	struct page *page = &folio->page;
+	unsigned long pfn = folio_pfn(folio);
 	struct vm_area_struct *vma;
 	unsigned long v_start;
 	unsigned long v_end;
@@ -412,7 +412,7 @@ retry:
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);

-		if (!hugetlb_vma_maps_page(vma, v_start, page))
+		if (!hugetlb_vma_maps_pfn(vma, v_start, pfn))
 			continue;

 		if (!hugetlb_vma_trylock_write(vma)) {
@@ -462,7 +462,7 @@ retry:
 		 */
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
-		if (hugetlb_vma_maps_page(vma, v_start, page))
+		if (hugetlb_vma_maps_pfn(vma, v_start, pfn))
 			unmap_hugepage_range(vma, v_start, v_end, NULL,
 					     ZAP_FLAG_DROP_MARKER);
_

Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are

mm-separate-folio_split_memcg_refs-from-split_page_memcg.patch
mm-simplify-split_page_memcg.patch
mm-remove-references-to-folio-in-split_page_memcg.patch
mm-simplify-folio_memcg_charged.patch
mm-remove-references-to-folio-in-__memcg_kmem_uncharge_page.patch
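
The pte_page()-vs-pte_pfn() reasoning in the changelog can be seen in a
standalone sketch.  What follows is a minimal userspace approximation,
not the kernel's real definitions: pte_t, PTE_PFN_MASK, mem_map,
maps_page() and maps_pfn() are all simplified stand-ins assumed here
only for illustration.  On many architectures pte_page() really is
defined as pfn_to_page(pte_pfn(pte)), so a page-based check pays for a
pfn-to-page conversion on every call, while a pfn-based check does that
conversion zero times (the caller does one folio_pfn() up front).

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t pte_t;
struct page { int flags; };

#define PAGE_SHIFT	12
#define PTE_PFN_MASK	0x000ffffffffff000ULL	/* simplified mask */

static struct page *mem_map;	/* flatmem-style page array (assumed) */

static inline unsigned long pte_pfn(pte_t pte)
{
	/* Extract the page frame number: mask, then shift. */
	return (pte & PTE_PFN_MASK) >> PAGE_SHIFT;
}

static inline struct page *pfn_to_page(unsigned long pfn)
{
	/* Extra arithmetic on every call (costlier under sparsemem). */
	return mem_map + pfn;
}

#define pte_page(pte)	pfn_to_page(pte_pfn(pte))

/* Old style: converts the pte's pfn to a struct page on each check. */
static bool maps_page(pte_t pte, struct page *page)
{
	return pte_page(pte) == page;
}

/* New style: caller computes the target pfn once, e.g. via folio_pfn(). */
static bool maps_pfn(pte_t pte, unsigned long pfn)
{
	return pte_pfn(pte) == pfn;
}

In the patch above this shows up as hugetlb_unmap_file_folio() hoisting
the folio_pfn(folio) call out of the loop and passing the resulting pfn
to every hugetlb_vma_maps_pfn() check.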