The patch titled
     Subject: mm/damon/vaddr: convert hugetlb related functions to use a folio
has been added to the -mm mm-unstable branch.  Its filename is
     mm-damon-vaddr-convert-hugetlb-related-functions-to-use-a-folio.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-damon-vaddr-convert-hugetlb-related-functions-to-use-a-folio.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Subject: mm/damon/vaddr: convert hugetlb related functions to use a folio
Date: Fri, 30 Dec 2022 15:08:49 +0800

Convert damon_hugetlb_mkold() and damon_young_hugetlb_entry() to use a
folio.
Link: https://lkml.kernel.org/r/20221230070849.63358-9-wangkefeng.wang@xxxxxxxxxx
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Reviewed-by: SeongJae Park <sj@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/mm/damon/vaddr.c~mm-damon-vaddr-convert-hugetlb-related-functions-to-use-a-folio
+++ a/mm/damon/vaddr.c
@@ -335,9 +335,9 @@ static void damon_hugetlb_mkold(pte_t *p
 {
 	bool referenced = false;
 	pte_t entry = huge_ptep_get(pte);
-	struct page *page = pte_page(entry);
+	struct folio *folio = pfn_folio(pte_pfn(entry));

-	get_page(page);
+	folio_get(folio);

 	if (pte_young(entry)) {
 		referenced = true;
@@ -352,10 +352,10 @@ static void damon_hugetlb_mkold(pte_t *p
 #endif /* CONFIG_MMU_NOTIFIER */

 	if (referenced)
-		set_page_young(page);
+		folio_set_young(folio);

-	set_page_idle(page);
-	put_page(page);
+	folio_set_idle(folio);
+	folio_put(folio);
 }

 static int damon_mkold_hugetlb_entry(pte_t *pte, unsigned long hmask,
@@ -490,7 +490,7 @@ static int damon_young_hugetlb_entry(pte
 {
 	struct damon_young_walk_private *priv = walk->private;
 	struct hstate *h = hstate_vma(walk->vma);
-	struct page *page;
+	struct folio *folio;
 	spinlock_t *ptl;
 	pte_t entry;

@@ -499,16 +499,16 @@ static int damon_young_hugetlb_entry(pte
 	if (!pte_present(entry))
 		goto out;

-	page = pte_page(entry);
-	get_page(page);
+	folio = pfn_folio(pte_pfn(entry));
+	folio_get(folio);

-	if (pte_young(entry) || !page_is_idle(page) ||
+	if (pte_young(entry) || !folio_test_idle(folio) ||
 	    mmu_notifier_test_young(walk->mm, addr)) {
 		*priv->page_sz = huge_page_size(h);
 		priv->young = true;
 	}

-	put_page(page);
+	folio_put(folio);
 out:
 	spin_unlock(ptl);
_

Patches currently in -mm which might be from wangkefeng.wang@xxxxxxxxxx are

mm-hwposion-support-recovery-from-ksm_might_need_to_copy.patch
mm-hwposion-support-recovery-from-ksm_might_need_to_copy-v3.patch
mm-huge_memory-convert-madvise_free_huge_pmd-to-use-a-folio.patch
mm-swap-convert-mark_page_lazyfree-to-folio_mark_lazyfree.patch
mm-huge_memory-convert-split_huge_pages_all-to-use-a-folio.patch
mm-page_idle-convert-page-idle-to-use-a-folio.patch
mm-damon-introduce-damon_get_folio.patch
mm-damon-convert-damon_ptep-pmdp_mkold-to-use-a-folio.patch
mm-damon-paddr-convert-damon_pa_-to-use-a-folio.patch
mm-damon-vaddr-convert-damon_young_pmd_entry-to-use-a-folio.patch
mm-damon-remove-unneeded-damon_get_page.patch
mm-damon-vaddr-convert-hugetlb-related-functions-to-use-a-folio.patch