The patch titled
     Subject: fs/proc/task_mmu: use folio API in pte_is_pinned()
has been added to the -mm mm-unstable branch.  Its filename is
     fs-proc-task_mmu-use-folio-api-in-pte_is_pinned.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/fs-proc-task_mmu-use-folio-api-in-pte_is_pinned.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Subject: fs/proc/task_mmu: use folio API in pte_is_pinned()
Date: Tue, 4 Jun 2024 19:48:19 +0800

Patch series "mm: remove page_maybe_dma_pinned() and page_mkclean()".

Most page_maybe_dma_pinned() and page_mkclean() callers have been
converted to the folio equivalents; after two more conversions, remove
them and update the comments and documentation.


This patch (of 4):

Convert pte_is_pinned() to use the vm_normal_folio() and
folio_maybe_dma_pinned() APIs, which helps to remove
page_maybe_dma_pinned() in a subsequent change.

Link: https://lkml.kernel.org/r/20240604114822.2089819-1-wangkefeng.wang@xxxxxxxxxx
Link: https://lkml.kernel.org/r/20240604114822.2089819-2-wangkefeng.wang@xxxxxxxxxx
Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: Daniel Vetter <daniel@xxxxxxxx>
Cc: Helge Deller <deller@xxxxxx>
Cc: Jonathan Corbet <corbet@xxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 fs/proc/task_mmu.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/fs/proc/task_mmu.c~fs-proc-task_mmu-use-folio-api-in-pte_is_pinned
+++ a/fs/proc/task_mmu.c
@@ -1088,7 +1088,7 @@ struct clear_refs_private {
 
 static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
 {
-	struct page *page;
+	struct folio *folio;
 
 	if (!pte_write(pte))
 		return false;
@@ -1096,10 +1096,10 @@ static inline bool pte_is_pinned(struct 
 		return false;
 	if (likely(!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags)))
 		return false;
-	page = vm_normal_page(vma, addr, pte);
-	if (!page)
+	folio = vm_normal_folio(vma, addr, pte);
+	if (!folio)
 		return false;
-	return page_maybe_dma_pinned(page);
+	return folio_maybe_dma_pinned(folio);
 }
 
 static inline void clear_soft_dirty(struct vm_area_struct *vma,
_
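As background on the API the patch moves to: folio_maybe_dma_pinned() can only
report that a folio *may* be pinned, because small folios have no dedicated pin
counter and FOLL_PIN pins are folded into the ordinary refcount.  The sketch
below illustrates that heuristic.  It is an approximation for explanation only,
not the include/linux/mm.h implementation, and the _sketch name is made up for
this note; folio_test_large(), folio_ref_count(), folio->_pincount and
GUP_PIN_COUNTING_BIAS are the real kernel symbols, but details vary by version.

/*
 * Rough sketch of the folio_maybe_dma_pinned() heuristic, for
 * illustration only (see include/linux/mm.h for the real code).
 */
static inline bool folio_maybe_dma_pinned_sketch(struct folio *folio)
{
	/* Large folios track FOLL_PIN pins exactly in a dedicated counter. */
	if (folio_test_large(folio))
		return atomic_read(&folio->_pincount) > 0;

	/*
	 * Small folios fold pins into the refcount: each pin adds
	 * GUP_PIN_COUNTING_BIAS (1024).  A refcount at or above the bias
	 * therefore *may* mean a pin, or just ~1024 plain references
	 * (a false positive).
	 */
	return ((unsigned int)folio_ref_count(folio)) >= GUP_PIN_COUNTING_BIAS;
}

That imprecision is acceptable in pte_is_pinned(): on a false positive,
clear_soft_dirty() simply skips the PTE rather than write-protecting a page
that might be under DMA.
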
Patches currently in -mm which might be from wangkefeng.wang@xxxxxxxxxx are

mm-add-folio_alloc_mpol.patch
mm-mempolicy-use-folio_alloc_mpol_noprof-in-vma_alloc_folio_noprof.patch
mm-mempolicy-use-folio_alloc_mpol-in-alloc_migration_target_by_mpol.patch
mm-shmem-use-folio_alloc_mpol-in-shmem_alloc_folio.patch
mm-refactor-folio_undo_large_rmappable.patch
mm-memcontrol-remove-page_memcg.patch
rmap-remove-define_page_vma_walk.patch
mm-migrate-simplify-__buffer_migrate_folio.patch
mm-migrate_device-use-a-newfolio-in-__migrate_device_pages.patch
mm-migrate_device-unify-migrate-folio-for-migrate_sync_no_copy.patch
mm-migrate-remove-migrate_folio_extra.patch
mm-remove-migrate_sync_no_copy-mode.patch
fs-proc-task_mmu-use-folio-api-in-pte_is_pinned.patch
mm-remove-page_maybe_dma_pinned.patch
fb_defio-use-a-folio-in-fb_deferred_io_work.patch
mm-remove-page_mkclean.patch