+ mm-pagewalk-allow-walk_page_range_novma-without-mm.patch added to mm-unstable branch

The patch titled
     Subject: mm: pagewalk: allow walk_page_range_novma() without mm
has been added to the -mm mm-unstable branch.  Its filename is
     mm-pagewalk-allow-walk_page_range_novma-without-mm.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-pagewalk-allow-walk_page_range_novma-without-mm.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Rolf Eike Beer <eb@xxxxxxxxx>
Subject: mm: pagewalk: allow walk_page_range_novma() without mm
Date: Mon, 22 Aug 2022 15:03:29 +0200

Since e47690d756a7 ("x86: mm: avoid allocating struct mm_struct on the
stack") a pgd can be passed to walk_page_range_novma().  When one is
supplied, no place in the pagewalk code uses walk.mm anymore, so permit
passing a NULL mm instead.  It is up to the caller to ensure proper
locking on the pgd in this case.

Link: https://lkml.kernel.org/r/5760214.MhkbZ0Pkbq@devpool047
Signed-off-by: Rolf Eike Beer <eb@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/pagewalk.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

--- a/mm/pagewalk.c~mm-pagewalk-allow-walk_page_range_novma-without-mm
+++ a/mm/pagewalk.c
@@ -506,6 +506,8 @@ int walk_page_range(struct mm_struct *mm
  * not backed by VMAs. Because 'unusual' entries may be walked this function
  * will also not lock the PTEs for the pte_entry() callback. This is useful for
  * walking the kernel pages tables or page tables for firmware.
+ *
+ * Either mm or pgd may be NULL, but not both.
  */
 int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
 			  unsigned long end, const struct mm_walk_ops *ops,
@@ -520,10 +522,11 @@ int walk_page_range_novma(struct mm_stru
 		.no_vma		= true
 	};
 
-	if (start >= end || !walk.mm)
+	if (start >= end || (!walk.mm && !walk.pgd))
 		return -EINVAL;
 
-	mmap_assert_locked(walk.mm);
+	if (walk.mm)
+		mmap_assert_locked(walk.mm);
 
 	return walk_pgd_range(start, end, &walk);
 }
_

Patches currently in -mm which might be from eb@xxxxxxxxx are

mm-pagewalk-make-error-checks-more-obvious.patch
mm-pagewalk-dont-check-vma-in-walk_page_range_novma.patch
mm-pagewalk-fix-documentation-of-pte-hole-handling.patch
mm-pagewalk-add-api-documentation-for-walk_page_range_novma.patch
mm-pagewalk-allow-walk_page_range_novma-without-mm.patch
mm-pagewalk-move-variables-to-more-local-scope-tweak-loops.patch
