The patch titled
     Subject: mm/damon/vaddr: convert damon_young_pmd_entry() to use a folio
has been added to the -mm mm-unstable branch.  Its filename is
     mm-damon-vaddr-convert-damon_young_pmd_entry-to-use-a-folio.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-damon-vaddr-convert-damon_young_pmd_entry-to-use-a-folio.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Subject: mm/damon/vaddr: convert damon_young_pmd_entry() to use a folio
Date: Fri, 30 Dec 2022 15:08:47 +0800

With damon_get_folio(), let's convert damon_young_pmd_entry() to use a
folio.

Link: https://lkml.kernel.org/r/20221230070849.63358-7-wangkefeng.wang@xxxxxxxxxx
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Reviewed-by: SeongJae Park <sj@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/mm/damon/vaddr.c~mm-damon-vaddr-convert-damon_young_pmd_entry-to-use-a-folio
+++ a/mm/damon/vaddr.c
@@ -431,7 +431,7 @@ static int damon_young_pmd_entry(pmd_t *
 {
 	pte_t *pte;
 	spinlock_t *ptl;
-	struct page *page;
+	struct folio *folio;
 	struct damon_young_walk_private *priv = walk->private;
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -446,16 +446,16 @@ static int damon_young_pmd_entry(pmd_t *
 			spin_unlock(ptl);
 			goto regular_page;
 		}
-		page = damon_get_page(pmd_pfn(*pmd));
-		if (!page)
+		folio = damon_get_folio(pmd_pfn(*pmd));
+		if (!folio)
 			goto huge_out;
-		if (pmd_young(*pmd) || !page_is_idle(page) ||
+		if (pmd_young(*pmd) || !folio_test_idle(folio) ||
 		    mmu_notifier_test_young(walk->mm, addr)) {
 			*priv->page_sz = HPAGE_PMD_SIZE;
 			priv->young = true;
 		}
-		put_page(page);
+		folio_put(folio);
 huge_out:
 		spin_unlock(ptl);
 		return 0;
 	}
@@ -469,15 +469,15 @@ regular_page:
 	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	if (!pte_present(*pte))
 		goto out;
-	page = damon_get_page(pte_pfn(*pte));
-	if (!page)
+	folio = damon_get_folio(pte_pfn(*pte));
+	if (!folio)
 		goto out;
-	if (pte_young(*pte) || !page_is_idle(page) ||
+	if (pte_young(*pte) || !folio_test_idle(folio) ||
 	    mmu_notifier_test_young(walk->mm, addr)) {
 		*priv->page_sz = PAGE_SIZE;
 		priv->young = true;
 	}
-	put_page(page);
+	folio_put(folio);
 out:
 	pte_unmap_unlock(pte, ptl);
 	return 0;
_

Patches currently in -mm which might be from wangkefeng.wang@xxxxxxxxxx are

mm-hwposion-support-recovery-from-ksm_might_need_to_copy.patch
mm-hwposion-support-recovery-from-ksm_might_need_to_copy-v3.patch
mm-huge_memory-convert-madvise_free_huge_pmd-to-use-a-folio.patch
mm-swap-convert-mark_page_lazyfree-to-folio_mark_lazyfree.patch
mm-huge_memory-convert-split_huge_pages_all-to-use-a-folio.patch
mm-page_idle-convert-page-idle-to-use-a-folio.patch
mm-damon-introduce-damon_get_folio.patch
mm-damon-convert-damon_ptep-pmdp_mkold-to-use-a-folio.patch
mm-damon-paddr-convert-damon_pa_-to-use-a-folio.patch
mm-damon-vaddr-convert-damon_young_pmd_entry-to-use-a-folio.patch
mm-damon-remove-unneeded-damon_get_page.patch
mm-damon-vaddr-convert-hugetlb-related-functions-to-use-a-folio.patch
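
Editor's note: the helper this patch builds on, damon_get_folio(), is added
earlier in the same series by mm-damon-introduce-damon_get_folio.patch (not
shown here).  As a rough sketch only -- not the actual patch -- a folio-based
counterpart of damon_get_page() would plausibly take a reference on the folio
backing a PFN only when that PFN maps an online LRU page, along these lines:

	/*
	 * Sketch of an assumed damon_get_folio() (simplified; see the
	 * real patch in the series for the actual implementation).
	 * Returns the folio backing @pfn with a reference held, or
	 * NULL if it is not an online LRU page.
	 */
	static struct folio *damon_get_folio(unsigned long pfn)
	{
		struct page *page = pfn_to_online_page(pfn);
		struct folio *folio;

		if (!page)
			return NULL;

		folio = page_folio(page);
		if (!folio_test_lru(folio) || !folio_try_get(folio))
			return NULL;
		return folio;
	}

Callers such as damon_young_pmd_entry() above then drop the reference with
folio_put() instead of put_page(), which is exactly the substitution the
diff performs.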