The patch titled
     Subject: mm/pagewalk: add walk_page_range_vma()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-pagewalk-add-walk_page_range_vma.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-pagewalk-add-walk_page_range_vma.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: David Hildenbrand <david@xxxxxxxxxx>
Subject: mm/pagewalk: add walk_page_range_vma()
Date: Fri, 30 Sep 2022 16:19:29 +0200

Let's add walk_page_range_vma(), which is similar to walk_page_vma() but
walks only a given subset of the VMA range.

It will be used in KSM code to stop using follow_page() next.

Link: https://lkml.kernel.org/r/20220930141931.174362-6-david@xxxxxxxxxx
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: John Hubbard <jhubbard@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Shuah Khan <shuah@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/pagewalk.h |    3 +++
 mm/pagewalk.c            |   27 +++++++++++++++++++++++++++
 2 files changed, 30 insertions(+)

--- a/include/linux/pagewalk.h~mm-pagewalk-add-walk_page_range_vma
+++ a/include/linux/pagewalk.h
@@ -99,6 +99,9 @@ int walk_page_range_novma(struct mm_stru
 			  unsigned long end, const struct mm_walk_ops *ops,
 			  pgd_t *pgd,
 			  void *private);
+int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
+			unsigned long end, const struct mm_walk_ops *ops,
+			void *private);
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		  void *private);
 int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
--- a/mm/pagewalk.c~mm-pagewalk-add-walk_page_range_vma
+++ a/mm/pagewalk.c
@@ -517,6 +517,33 @@ int walk_page_range_novma(struct mm_stru
 	return walk_pgd_range(start, end, &walk);
 }
 
+int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
+			unsigned long end, const struct mm_walk_ops *ops,
+			void *private)
+{
+	struct mm_walk walk = {
+		.ops		= ops,
+		.mm		= vma->vm_mm,
+		.vma		= vma,
+		.private	= private,
+	};
+	int err;
+
+	if (start >= end || !walk.mm)
+		return -EINVAL;
+	if (start < vma->vm_start || end > vma->vm_end)
+		return -EINVAL;
+
+	mmap_assert_locked(walk.mm);
+
+	err = walk_page_test(start, end, &walk);
+	if (err > 0)
+		return 0;
+	if (err < 0)
+		return err;
+	return __walk_page_range(start, end, &walk);
+}
+
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		  void *private)
 {
_

Patches currently in -mm which might be from david@xxxxxxxxxx are

selftests-vm-add-test-to-measure-madv_unmergeable-performance.patch
mm-ksm-simplify-break_ksm-to-not-rely-on-vm_fault_write.patch
mm-remove-vm_fault_write.patch
mm-ksm-fix-ksm-cow-breaking-with-userfaultfd-wp-via-fault_flag_unshare.patch
mm-pagewalk-add-walk_page_range_vma.patch
mm-ksm-convert-break_ksm-to-use-walk_page_range_vma.patch
mm-gup-remove-foll_migration.patch
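
For reference, a minimal sketch of how a caller might use the new
walk_page_range_vma() interface.  This is illustrative only and not part
of the patch; the callback and helper names (count_present_ptes,
count_present_in_range) are hypothetical:

#include <linux/pagewalk.h>
#include <linux/mm.h>

/* Hypothetical pte_entry callback: count present PTEs in the range. */
static int count_present_ptes(pte_t *pte, unsigned long addr,
			      unsigned long next, struct mm_walk *walk)
{
	unsigned long *nr_present = walk->private;

	if (pte_present(*pte))
		(*nr_present)++;
	return 0;	/* continue the walk */
}

static const struct mm_walk_ops count_present_ops = {
	.pte_entry	= count_present_ptes,
};

/*
 * Hypothetical helper: count present PTEs in [start, end), which must
 * lie within the given VMA; walk_page_range_vma() returns -EINVAL
 * otherwise.  The caller must hold the mmap lock, as the walk asserts.
 */
static unsigned long count_present_in_range(struct vm_area_struct *vma,
					    unsigned long start,
					    unsigned long end)
{
	unsigned long nr_present = 0;

	mmap_assert_locked(vma->vm_mm);
	if (walk_page_range_vma(vma, start, end, &count_present_ops,
				&nr_present) < 0)
		return 0;
	return nr_present;
}

Compared with walk_page_vma(), this lets the caller restrict the walk to
just the pages of interest instead of covering the whole VMA.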