The patch titled
     Subject: mm: pagewalk: add test_p?d callbacks
has been removed from the -mm tree.  Its filename was
     mm-pagewalk-add-test_pd-callbacks.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Steven Price <steven.price@xxxxxxx>
Subject: mm: pagewalk: add test_p?d callbacks

It is useful to be able to skip parts of the page table tree even when
walking without VMAs.  Add test_p?d callbacks similar to test_walk but
which are called just before a table at that level is walked.  If the
callback returns non-zero then the entire table is skipped.

Link: http://lkml.kernel.org/r/20191028135910.33253-14-steven.price@xxxxxxx
Signed-off-by: Steven Price <steven.price@xxxxxxx>
Tested-by: Zong Li <zong.li@xxxxxxxxxx>
Cc: Albert Ou <aou@xxxxxxxxxxxxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Alexandre Ghiti <alex@xxxxxxxx>
Cc: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
Cc: Arnd Bergmann <arnd@xxxxxxxx>
Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Christian Borntraeger <borntraeger@xxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Dave Jiang <dave.jiang@xxxxxxxxx>
Cc: David S. Miller <davem@xxxxxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Heiko Carstens <heiko.carstens@xxxxxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxx>
Cc: James Hogan <jhogan@xxxxxxxxxx>
Cc: James Morse <james.morse@xxxxxxx>
Cc: "Liang, Kan" <kan.liang@xxxxxxxxxxxxxxx>
Cc: Mark Rutland <mark.rutland@xxxxxxx>
Cc: Matthew Wilcox <mawilcox@xxxxxxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
Cc: Palmer Dabbelt <palmer@xxxxxxxxxx>
Cc: Paul Burton <paul.burton@xxxxxxxx>
Cc: Paul Mackerras <paulus@xxxxxxxxx>
Cc: Paul Walmsley <paul.walmsley@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Ralf Baechle <ralf@xxxxxxxxxxxxxx>
Cc: Russell King <linux@xxxxxxxxxxxxxxx>
Cc: Shiraz Hashim <shashim@xxxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Vasily Gorbik <gor@xxxxxxxxxxxxx>
Cc: Vineet Gupta <vgupta@xxxxxxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/pagewalk.h |   11 +++++++++++
 mm/pagewalk.c            |   24 ++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

--- a/include/linux/pagewalk.h~mm-pagewalk-add-test_pd-callbacks
+++ a/include/linux/pagewalk.h
@@ -24,6 +24,11 @@ struct mm_walk;
  *			"do page table walk over the current vma", returning
  *			a negative value means "abort current page table walk
  *			right now" and returning 1 means "skip the current vma"
+ * @test_pmd:		similar to test_walk(), but called for every pmd.
+ * @test_pud:		similar to test_walk(), but called for every pud.
+ * @test_p4d:		similar to test_walk(), but called for every p4d.
+ *			Returning 0 means walk this part of the page tables,
+ *			returning 1 means to skip this range.
  * @pre_vma:		if set, called before starting walk on a non-null vma.
  * @post_vma:		if set, called after a walk on a non-null vma, provided
  *			that @pre_vma and the vma walk succeeded.
@@ -47,6 +52,12 @@ struct mm_walk_ops {
 	int (*hugetlb_entry)(pte_t *pte, unsigned long hmask,
 			     unsigned long addr, unsigned long next,
 			     struct mm_walk *walk);
+	int (*test_pmd)(unsigned long addr, unsigned long next,
+			pmd_t *pmd_start, struct mm_walk *walk);
+	int (*test_pud)(unsigned long addr, unsigned long next,
+			pud_t *pud_start, struct mm_walk *walk);
+	int (*test_p4d)(unsigned long addr, unsigned long next,
+			p4d_t *p4d_start, struct mm_walk *walk);
 	int (*test_walk)(unsigned long addr, unsigned long next,
 			struct mm_walk *walk);
 	int (*pre_vma)(unsigned long start, unsigned long end,
--- a/mm/pagewalk.c~mm-pagewalk-add-test_pd-callbacks
+++ a/mm/pagewalk.c
@@ -35,6 +35,14 @@ static int walk_pmd_range(pud_t *pud, un
 	const struct mm_walk_ops *ops = walk->ops;
 	int err = 0;
 
+	if (ops->test_pmd) {
+		err = ops->test_pmd(addr, end, pmd_offset(pud, 0UL), walk);
+		if (err < 0)
+			return err;
+		if (err > 0)
+			return 0;
+	}
+
 	pmd = pmd_offset(pud, addr);
 	do {
 again:
@@ -86,6 +94,14 @@ static int walk_pud_range(p4d_t *p4d, un
 	const struct mm_walk_ops *ops = walk->ops;
 	int err = 0;
 
+	if (ops->test_pud) {
+		err = ops->test_pud(addr, end, pud_offset(p4d, 0UL), walk);
+		if (err < 0)
+			return err;
+		if (err > 0)
+			return 0;
+	}
+
 	pud = pud_offset(p4d, addr);
 	do {
 again:
@@ -129,6 +145,14 @@ static int walk_p4d_range(pgd_t *pgd, un
 	const struct mm_walk_ops *ops = walk->ops;
 	int err = 0;
 
+	if (ops->test_p4d) {
+		err = ops->test_p4d(addr, end, p4d_offset(pgd, 0UL), walk);
+		if (err < 0)
+			return err;
+		if (err > 0)
+			return 0;
+	}
+
 	p4d = p4d_offset(pgd, addr);
 	do {
 		next = p4d_addr_end(addr, end);
_

Patches currently in -mm which might be from steven.price@xxxxxxx are

mm-pagewalk-add-depth-parameter-to-pte_hole.patch
x86-mm-point-to-struct-seq_file-from-struct-pg_state.patch
x86-mmefi-convert-ptdump_walk_pgd_level-to-take-a-mm_struct.patch
x86-mm-convert-ptdump_walk_pgd_level_debugfs-to-take-an-mm_struct.patch
x86-mm-convert-ptdump_walk_pgd_level_core-to-take-an-mm_struct.patch
mm-add-generic-ptdump.patch
x86-mm-convert-dump_pagetables-to-use-walk_page_range.patch
arm64-mm-convert-mm-dumpc-to-use-walk_page_range.patch
arm64-mm-display-non-present-entries-in-ptdump.patch
mm-ptdump-reduce-level-numbers-by-1-in-note_page.patch
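
For illustration only (not part of the patch above): a minimal sketch of how a
walker might wire up one of the new callbacks.  It assumes the mm_walk_ops
member names and signatures added by this patch; the reserved-range bounds and
the handler names (skip_reserved_pmd, note_pte, example_ops) are hypothetical
placeholders, not existing kernel symbols.

	#include <linux/pagewalk.h>

	/* Hypothetical region whose page tables we do not want to descend into. */
	#define EXAMPLE_SKIP_START	0xffff800000000000UL
	#define EXAMPLE_SKIP_END	0xffff800040000000UL

	static int skip_reserved_pmd(unsigned long addr, unsigned long next,
				     pmd_t *pmd_start, struct mm_walk *walk)
	{
		/* Returning 1 skips the whole PMD table for [addr, next). */
		if (addr >= EXAMPLE_SKIP_START && next <= EXAMPLE_SKIP_END)
			return 1;
		/* Returning 0 walks this part of the page tables as usual. */
		return 0;
	}

	static int note_pte(pte_t *pte, unsigned long addr,
			    unsigned long next, struct mm_walk *walk)
	{
		/* Placeholder PTE handler; a real walker would record the entry. */
		return 0;
	}

	static const struct mm_walk_ops example_ops = {
		.test_pmd  = skip_reserved_pmd,
		.pte_entry = note_pte,
	};

A negative return from the callback aborts the walk with that error, mirroring
the test_walk() convention quoted in the header comment above.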