Excerpts from Ding Tianhong's message of January 4, 2021 10:33 pm:
> On 2020/12/5 14:57, Nicholas Piggin wrote:
>> This changes the awkward approach where architectures provide init
>> functions to determine which levels they can provide large mappings for,
>> to one where the arch is queried for each call.
>>
>> This removes code and indirection, and allows constant-folding of dead
>> code for unsupported levels.
>>
>> This also adds a prot argument to the arch query. This is unused
>> currently but could help with some architectures (e.g., some powerpc
>> processors can't map uncacheable memory with large pages).
>>
>> Cc: linuxppc-dev@xxxxxxxxxxxxxxxx
>> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
>> Cc: Will Deacon <will@xxxxxxxxxx>
>> Cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
>> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
>> Cc: Borislav Petkov <bp@xxxxxxxxx>
>> Cc: x86@xxxxxxxxxx
>> Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
>> Acked-by: Catalin Marinas <catalin.marinas@xxxxxxx> [arm64]
>> Signed-off-by: Nicholas Piggin <npiggin@xxxxxxxxx>
>> ---
>>  arch/arm64/include/asm/vmalloc.h         |  8 +++
>>  arch/arm64/mm/mmu.c                      | 10 +--
>>  arch/powerpc/include/asm/vmalloc.h       |  8 +++
>>  arch/powerpc/mm/book3s64/radix_pgtable.c |  8 +--
>>  arch/x86/include/asm/vmalloc.h           |  7 ++
>>  arch/x86/mm/ioremap.c                    | 10 +--
>>  include/linux/io.h                       |  9 ---
>>  include/linux/vmalloc.h                  |  6 ++
>>  init/main.c                              |  1 -
>>  mm/ioremap.c                             | 88 +++++++++--------------
>>  10 files changed, 77 insertions(+), 78 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
>> index 2ca708ab9b20..597b40405319 100644
>> --- a/arch/arm64/include/asm/vmalloc.h
>> +++ b/arch/arm64/include/asm/vmalloc.h
>> @@ -1,4 +1,12 @@
>>  #ifndef _ASM_ARM64_VMALLOC_H
>>  #define _ASM_ARM64_VMALLOC_H
>>  
>> +#include <asm/page.h>
>> +
>> +#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
>> +bool arch_vmap_p4d_supported(pgprot_t prot);
>> +bool arch_vmap_pud_supported(pgprot_t prot);
>> +bool arch_vmap_pmd_supported(pgprot_t prot);
>> +#endif
>> +
>>  #endif /* _ASM_ARM64_VMALLOC_H */
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index ca692a815731..1b60079c1cef 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -1315,12 +1315,12 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
>>  	return dt_virt;
>>  }
>>  
>> -int __init arch_ioremap_p4d_supported(void)
>> +bool arch_vmap_p4d_supported(pgprot_t prot)
>>  {
>> -	return 0;
>> +	return false;
>>  }
>>
>
> I think you should put this function inside CONFIG_HAVE_ARCH_HUGE_VMAP, otherwise
> it may break the compile when CONFIG_HAVE_ARCH_HUGE_VMAP is disabled, the same
> as x86 and ppc.

Ah, good catch. arm64 is okay because it always selects HAVE_ARCH_HUGE_VMAP,
and powerpc is okay because it places these functions in a file that is only
compiled for configs that select huge vmap, but the x86-32 build without PAE
breaks. I'll fix that.

Thanks,
Nick