The quilt patch titled
     Subject: mm/damon: introduce damon_get_folio()
has been removed from the -mm tree.  Its filename was
     mm-damon-introduce-damon_get_folio.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Subject: mm/damon: introduce damon_get_folio()
Date: Fri, 30 Dec 2022 15:08:44 +0800

Introduce damon_get_folio() and a temporary wrapper function,
damon_get_page(), to help convert DAMON-related functions to use folios.
The wrapper will be dropped once the conversion is completed.

Link: https://lkml.kernel.org/r/20221230070849.63358-4-wangkefeng.wang@xxxxxxxxxx
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Reviewed-by: SeongJae Park <sj@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/damon/ops-common.c |   18 +++++++++++-------
 mm/damon/ops-common.h |    9 ++++++++-
 2 files changed, 19 insertions(+), 8 deletions(-)

--- a/mm/damon/ops-common.c~mm-damon-introduce-damon_get_folio
+++ a/mm/damon/ops-common.c
@@ -16,21 +16,25 @@
  * Get an online page for a pfn if it's in the LRU list. Otherwise, returns
  * NULL.
  *
- * The body of this function is stolen from the 'page_idle_get_page()'. We
+ * The body of this function is stolen from the 'page_idle_get_folio()'. We
  * steal rather than reuse it because the code is quite simple.
  */
-struct page *damon_get_page(unsigned long pfn)
+struct folio *damon_get_folio(unsigned long pfn)
 {
 	struct page *page = pfn_to_online_page(pfn);
+	struct folio *folio;
 
-	if (!page || !PageLRU(page) || !get_page_unless_zero(page))
+	if (!page || PageTail(page))
 		return NULL;
 
-	if (unlikely(!PageLRU(page))) {
-		put_page(page);
-		page = NULL;
+	folio = page_folio(page);
+	if (!folio_test_lru(folio) || !folio_try_get(folio))
+		return NULL;
+	if (unlikely(page_folio(page) != folio || !folio_test_lru(folio))) {
+		folio_put(folio);
+		folio = NULL;
 	}
-	return page;
+	return folio;
 }
 
 void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
--- a/mm/damon/ops-common.h~mm-damon-introduce-damon_get_folio
+++ a/mm/damon/ops-common.h
@@ -7,7 +7,14 @@
 
 #include <linux/damon.h>
 
-struct page *damon_get_page(unsigned long pfn);
+struct folio *damon_get_folio(unsigned long pfn);
+static inline struct page *damon_get_page(unsigned long pfn)
+{
+	struct folio *folio = damon_get_folio(pfn);
+
+	/* when folio is NULL, return &(0->page) mean return NULL */
+	return &folio->page;
+}
 
 void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr);
 void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr);
_

Patches currently in -mm which might be from wangkefeng.wang@xxxxxxxxxx are

mm-hwposion-support-recovery-from-ksm_might_need_to_copy.patch
mm-hwposion-support-recovery-from-ksm_might_need_to_copy-v3.patch
mm-madvise-use-vm_normal_folio-in-madvise_free_pte_range.patch
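
For readers puzzled by the wrapper's comment ("when folio is NULL, return
&(0->page) mean return NULL"): the point is that page is the first member
of struct folio, so it sits at offset 0 and "return &folio->page;" yields
NULL whenever folio is NULL. The user-space sketch below only illustrates
that idea; demo_page, demo_folio and demo_folio_page() are made-up names,
not the kernel structures, and the sketch spells out the NULL check that
the kernel version leaves implicit by relying on the zero offset.

#include <stddef.h>
#include <stdio.h>

/* Stand-ins for the kernel's struct page / struct folio: the folio's
 * first member is a page, so it sits at offset 0. */
struct demo_page { unsigned long flags; };
struct demo_folio { struct demo_page page; };

/*
 * Mirrors the damon_get_page() wrapper's intent: because the page member
 * is at offset 0, taking its address from a NULL folio gives back NULL.
 * The kernel returns &folio->page unconditionally and relies on that;
 * this sketch makes the NULL case explicit to stay within standard C.
 */
static struct demo_page *demo_folio_page(struct demo_folio *folio)
{
	return folio ? &folio->page : NULL;
}

int main(void)
{
	struct demo_folio folio;

	printf("offset of page inside folio: %zu\n",
	       offsetof(struct demo_folio, page));
	printf("page of &folio: %p\n", (void *)demo_folio_page(&folio));
	printf("page of NULL:   %p\n", (void *)demo_folio_page(NULL));
	return 0;
}

Keeping damon_get_page() as a static inline wrapper in the header means
existing callers keep compiling and keep getting NULL for offline or
non-LRU pfns while the folio conversion proceeds, which is why the commit
message describes it as temporary.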