The patch titled
     Subject: mm/debug_vm_pgtable: drop protection_map[] usage
has been added to the -mm tree.  Its filename is
     mm-debug_vm_pgtable-drop-protection_map-usage.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-debug_vm_pgtable-drop-protection_map-usage.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-debug_vm_pgtable-drop-protection_map-usage.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Anshuman Khandual <anshuman.khandual@xxxxxxx>
Subject: mm/debug_vm_pgtable: drop protection_map[] usage

Patch series "mm/mmap: Drop protection_map[] and platform's __SXXX/__PXXX requirements", v2.

protection_map[] is an array based construct that translates a given
vm_flags combination into a page protection value.  The array is
populated by the platform via the exported [__S000 .. __S111] and
[__P000 .. __P111] macros.  Its primary user is vm_get_page_prot(),
which determines the page protection value for a given vm_flags
combination.  The vm_get_page_prot() implementation may in turn call
the platform overrides arch_vm_get_page_prot() and arch_filter_pgprot().
Some platforms also override the protection_map[] entries that were
originally built from __SXXX/__PXXX with different runtime values.

Currently there are multiple layers of abstraction, i.e. the
__SXXX/__PXXX macros, protection_map[], arch_vm_get_page_prot() and
arch_filter_pgprot(), built between the platform and generic MM, which
together define vm_get_page_prot().  Hence this series proposes to drop
all of these abstraction levels and instead move the responsibility of
defining vm_get_page_prot() to the platform itself, making it clean and
simple.

The series first introduces ARCH_HAS_VM_GET_PAGE_PROT, which enables
platforms to define a custom vm_get_page_prot().  It starts by
converting the platforms that either change protection_map[] or define
the arch_filter_pgprot() or arch_vm_get_page_prot() overrides, allowing
those constructs to be dropped completely.  It then converts the
remaining platforms, allowing the __SXXX/__PXXX constructs to be
dropped completely.  Finally it drops the generic vm_get_page_prot()
and then ARCH_HAS_VM_GET_PAGE_PROT itself, as every platform now
defines its own vm_get_page_prot().

The series was inspired by an earlier discussion with Christoph Hellwig:

  https://lore.kernel.org/all/1632712920-8171-1-git-send-email-anshuman.khandual@xxxxxxx/

This series has been cross built for multiple platforms.


This patch (of 30):

Although protection_map[] contains the platform defined page protection
map for a given vm_flags combination, vm_get_page_prot() is the right
interface to use.  This also reduces the dependency on protection_map[],
which is going to be dropped completely later on.
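[Editorial aside, not part of this patch: a minimal sketch, in kernel context, of how the
abstraction layers described above compose in generic MM today; the exact mm/mmap.c code
may differ in detail.]

	/*
	 * Illustrative only: protection_map[] is indexed by the protection
	 * related vm_flags bits, and the platform can further adjust the
	 * result through the arch_vm_get_page_prot() and
	 * arch_filter_pgprot() overrides.
	 */
	pgprot_t vm_get_page_prot(unsigned long vm_flags)
	{
		pgprot_t ret = __pgprot(pgprot_val(protection_map[vm_flags &
					(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)]) |
				pgprot_val(arch_vm_get_page_prot(vm_flags)));

		return arch_filter_pgprot(ret);
	}

This series removes each of these indirections, leaving the platform to define
vm_get_page_prot() directly.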
Link: https://lkml.kernel.org/r/1645425519-9034-1-git-send-email-anshuman.khandual@xxxxxxx
Link: https://lkml.kernel.org/r/1645425519-9034-2-git-send-email-anshuman.khandual@xxxxxxx
Signed-off-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
Cc: Arnd Bergmann <arnd@xxxxxxxx>
Cc: Brian Cain <bcain@xxxxxxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Chris Zankel <chris@xxxxxxxxxx>
Cc: David S. Miller <davem@xxxxxxxxxxxxx>
Cc: Dinh Nguyen <dinguyen@xxxxxxxxxx>
Cc: Geert Uytterhoeven <geert@xxxxxxxxxxxxxx>
Cc: Guo Ren <guoren@xxxxxxxxxx>
Cc: Heiko Carstens <hca@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: "James E.J. Bottomley" <James.Bottomley@xxxxxxxxxxxxxxxxxxxxx>
Cc: Jeff Dike <jdike@xxxxxxxxxxx>
Cc: Jonas Bonn <jonas@xxxxxxxxxxxx>
Cc: Khalid Aziz <khalid.aziz@xxxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Michal Simek <monstr@xxxxxxxxx>
Cc: Nick Hu <nickhu@xxxxxxxxxxxxx>
Cc: Palmer Dabbelt <palmer@xxxxxxxxxxx>
Cc: Paul Mackerras <paulus@xxxxxxxxx>
Cc: Paul Walmsley <paul.walmsley@xxxxxxxxxx>
Cc: Richard Henderson <rth@xxxxxxxxxxx>
Cc: Rich Felker <dalias@xxxxxxxx>
Cc: Russell King <linux@xxxxxxxxxxxxxxx>
Cc: Stafford Horne <shorne@xxxxxxxxx>
Cc: Sven Schnelle <svens@xxxxxxxxxxxxx>
Cc: Thomas Bogendoerfer <tsbogend@xxxxxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Vasily Gorbik <gor@xxxxxxxxxxxxx>
Cc: Vineet Gupta <vgupta@xxxxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Yoshinori Sato <ysato@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/debug_vm_pgtable.c |   31 +++++++++++++++++++------------
 1 file changed, 19 insertions(+), 12 deletions(-)

--- a/mm/debug_vm_pgtable.c~mm-debug_vm_pgtable-drop-protection_map-usage
+++ a/mm/debug_vm_pgtable.c
@@ -93,7 +93,7 @@ struct pgtable_debug_args {
 
 static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
 {
-	pgprot_t prot = protection_map[idx];
+	pgprot_t prot = vm_get_page_prot(idx);
 	pte_t pte = pfn_pte(args->fixed_pte_pfn, prot);
 	unsigned long val = idx, *ptr = &val;
 
@@ -101,7 +101,7 @@ static void __init pte_basic_tests(struc
 
 	/*
 	 * This test needs to be executed after the given page table entry
-	 * is created with pfn_pte() to make sure that protection_map[idx]
+	 * is created with pfn_pte() to make sure that vm_get_page_prot(idx)
 	 * does not have the dirty bit enabled from the beginning. This is
 	 * important for platforms like arm64 where (!PTE_RDONLY) indicate
 	 * dirty bit being set.
@@ -190,7 +190,7 @@ static void __init pte_savedwrite_tests(
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)
 {
-	pgprot_t prot = protection_map[idx];
+	pgprot_t prot = vm_get_page_prot(idx);
 	unsigned long val = idx, *ptr = &val;
 	pmd_t pmd;
 
@@ -202,7 +202,7 @@ static void __init pmd_basic_tests(struc
 
 	/*
 	 * This test needs to be executed after the given page table entry
-	 * is created with pfn_pmd() to make sure that protection_map[idx]
+	 * is created with pfn_pmd() to make sure that vm_get_page_prot(idx)
 	 * does not have the dirty bit enabled from the beginning. This is
 	 * important for platforms like arm64 where (!PTE_RDONLY) indicate
	 * dirty bit being set.
@@ -325,7 +325,7 @@ static void __init pmd_savedwrite_tests(
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static void __init pud_basic_tests(struct pgtable_debug_args *args, int idx)
 {
-	pgprot_t prot = protection_map[idx];
+	pgprot_t prot = vm_get_page_prot(idx);
 	unsigned long val = idx, *ptr = &val;
 	pud_t pud;
 
@@ -337,7 +337,7 @@ static void __init pud_basic_tests(struc
 
 	/*
 	 * This test needs to be executed after the given page table entry
-	 * is created with pfn_pud() to make sure that protection_map[idx]
+	 * is created with pfn_pud() to make sure that vm_get_page_prot(idx)
 	 * does not have the dirty bit enabled from the beginning. This is
 	 * important for platforms like arm64 where (!PTE_RDONLY) indicate
 	 * dirty bit being set.
@@ -1106,14 +1106,14 @@ static int __init init_args(struct pgtab
 	/*
 	 * Initialize the debugging data.
 	 *
-	 * protection_map[0] (or even protection_map[8]) will help create
-	 * page table entries with PROT_NONE permission as required for
-	 * pxx_protnone_tests().
+	 * vm_get_page_prot(VM_NONE) or vm_get_page_prot(VM_SHARED|VM_NONE)
+	 * will help create page table entries with PROT_NONE permission as
+	 * required for pxx_protnone_tests().
 	 */
 	memset(args, 0, sizeof(*args));
 	args->vaddr = get_random_vaddr();
 	args->page_prot = vm_get_page_prot(VMFLAGS);
-	args->page_prot_none = protection_map[0];
+	args->page_prot_none = vm_get_page_prot(VM_NONE);
 	args->is_contiguous_page = false;
 	args->pud_pfn = ULONG_MAX;
 	args->pmd_pfn = ULONG_MAX;
@@ -1248,12 +1248,19 @@ static int __init debug_vm_pgtable(void)
 		return ret;
 
 	/*
-	 * Iterate over the protection_map[] to make sure that all
+	 * Iterate over each possible vm_flags to make sure that all
 	 * the basic page table transformation validations just hold
 	 * true irrespective of the starting protection value for a
 	 * given page table entry.
+	 *
+	 * Protection based vm_flags combinations are always linear
+	 * and increasing, i.e. starting from VM_NONE and going up to
+	 * (VM_SHARED | READ | WRITE | EXEC).
 	 */
-	for (idx = 0; idx < ARRAY_SIZE(protection_map); idx++) {
+#define VM_FLAGS_START	(VM_NONE)
+#define VM_FLAGS_END	(VM_SHARED | VM_EXEC | VM_WRITE | VM_READ)
+
+	for (idx = VM_FLAGS_START; idx <= VM_FLAGS_END; idx++) {
 		pte_basic_tests(&args, idx);
 		pmd_basic_tests(&args, idx);
 		pud_basic_tests(&args, idx);
_

Patches currently in -mm which might be from anshuman.khandual@xxxxxxx are

mm-generalize-arch_has_filter_pgprot.patch
mm-merge-pte_mkhuge-call-into-arch_make_huge_pte.patch
mm-debug_vm_pgtable-drop-protection_map-usage.patch
mm-mmap-clarify-protection_map-indices.patch
mm-mmap-add-new-config-arch_has_vm_get_page_prot.patch
powerpc-mm-enable-arch_has_vm_get_page_prot.patch
arm64-mm-enable-arch_has_vm_get_page_prot.patch
sparc-mm-enable-arch_has_vm_get_page_prot.patch
mips-mm-enable-arch_has_vm_get_page_prot.patch
m68k-mm-enable-arch_has_vm_get_page_prot.patch
arm-mm-enable-arch_has_vm_get_page_prot.patch
mm-mmap-drop-protection_map.patch
mm-mmap-drop-arch_filter_pgprot.patch
mm-mmap-drop-arch_vm_get_page_pgprot.patch
s390-mm-enable-arch_has_vm_get_page_prot.patch
riscv-mm-enable-arch_has_vm_get_page_prot.patch
alpha-mm-enable-arch_has_vm_get_page_prot.patch
sh-mm-enable-arch_has_vm_get_page_prot.patch
arc-mm-enable-arch_has_vm_get_page_prot.patch
csky-mm-enable-arch_has_vm_get_page_prot.patch
extensa-mm-enable-arch_has_vm_get_page_prot.patch
parisc-mm-enable-arch_has_vm_get_page_prot.patch
openrisc-mm-enable-arch_has_vm_get_page_prot.patch
um-mm-enable-arch_has_vm_get_page_prot.patch
microblaze-mm-enable-arch_has_vm_get_page_prot.patch
nios2-mm-enable-arch_has_vm_get_page_prot.patch
hexagon-mm-enable-arch_has_vm_get_page_prot.patch
nds32-mm-enable-arch_has_vm_get_page_prot.patch
ia64-mm-enable-arch_has_vm_get_page_prot.patch
mm-mmap-drop-generic-vm_get_page_prot.patch
mm-mmap-drop-arch_has_vm_get_page_prot.patch
mm-hugetlb-generalize-arch_want_general_hugetlb.patch
mm-migration-add-trace-events-for-thp-migrations.patch
mm-migration-add-trace-events-for-base-page-and-hugetlb-migrations.patch
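[Editorial aside on the final hunk of the patch above, illustration only and not kernel
code: the protection related vm_flags bits occupy the low nibble in the generic
<linux/mm.h> definitions (VM_READ 0x1, VM_WRITE 0x2, VM_EXEC 0x4, VM_SHARED 0x8), so
iterating from VM_NONE up to (VM_SHARED | VM_EXEC | VM_WRITE | VM_READ) walks the same
sixteen combinations that protection_map[] used to be indexed by.  A small standalone
sketch:]

	#include <stdio.h>

	/* Values mirror the generic vm_flags definitions; illustration only. */
	#define VM_NONE   0x00000000UL
	#define VM_READ   0x00000001UL
	#define VM_WRITE  0x00000002UL
	#define VM_EXEC   0x00000004UL
	#define VM_SHARED 0x00000008UL

	int main(void)
	{
		unsigned long idx;

		/* Walk VM_NONE .. (VM_SHARED | VM_EXEC | VM_WRITE | VM_READ), i.e. 0..15 */
		for (idx = VM_NONE; idx <= (VM_SHARED | VM_EXEC | VM_WRITE | VM_READ); idx++)
			printf("idx %2lu -> %s%s%s%s\n", idx,
			       (idx & VM_SHARED) ? "shared " : "private ",
			       (idx & VM_READ)  ? "r" : "-",
			       (idx & VM_WRITE) ? "w" : "-",
			       (idx & VM_EXEC)  ? "x" : "-");
		return 0;
	}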