Not tested on x86 or arm64, so I would appreciate a quick test there before
I ask Andrew to put it in -mm. The other option is that I can disable huge
vmallocs on those architectures for the time being.

Since v2:
- Rebased on the vmalloc cleanups and split the series into simpler pieces.
- Fixed several compile errors and warnings.
- Keep the page array and accounting in small page units, because
  struct vm_struct is an interface (this should fix the x86 vmap stack
  debug assert; a rough sketch of the idea follows below the diffstat).
  [Thanks Zefan]

Nicholas Piggin (8):
  mm/vmalloc: fix vmalloc_to_page for huge vmap mappings
  mm: apply_to_pte_range warn and fail if a large pte is encountered
  mm/vmalloc: rename vmap_*_range vmap_pages_*_range
  lib/ioremap: rename ioremap_*_range to vmap_*_range
  mm: HUGE_VMAP arch support cleanup
  mm: Move vmap_range from lib/ioremap.c to mm/vmalloc.c
  mm/vmalloc: add vmap_range_noflush variant
  mm/vmalloc: Hugepage vmalloc mappings

 .../admin-guide/kernel-parameters.txt    |   2 +
 arch/arm64/mm/mmu.c                      |  10 +-
 arch/powerpc/mm/book3s64/radix_pgtable.c |   8 +-
 arch/x86/mm/ioremap.c                    |  10 +-
 include/linux/io.h                       |   9 -
 include/linux/vmalloc.h                  |  13 +
 init/main.c                              |   1 -
 mm/ioremap.c                             | 231 +--------
 mm/memory.c                              |  60 ++-
 mm/vmalloc.c                             | 442 +++++++++++++++---
 10 files changed, 453 insertions(+), 333 deletions(-)

-- 
2.23.0
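
For context on the third "Since v2" item, here is a loose illustrative sketch
(not code from this series; the helper name is made up) of what keeping the
page array and accounting in small page units means when the backing
allocation is a PMD-sized compound page:

	#include <linux/mm.h>
	#include <linux/vmalloc.h>

	/*
	 * Hypothetical illustration only: even when the backing allocation is
	 * one PMD-sized compound page, struct vm_struct keeps its pages[]
	 * array and nr_pages count in PAGE_SIZE units, so existing consumers
	 * of the interface (vmalloc_to_page(), /proc/vmallocinfo, the x86
	 * vmap stack debug checks) still see one entry per small page.
	 */
	static void record_small_pages(struct vm_struct *area, struct page *page)
	{
		unsigned int i, nr = PMD_SIZE >> PAGE_SHIFT; /* small pages per PMD */

		for (i = 0; i < nr; i++)
			area->pages[area->nr_pages++] = page + i; /* i-th sub-page */
	}

The point, as I read the changelog item, is that accounting one struct page
per huge mapping instead would change the meaning of nr_pages for everything
that consumes vm_struct, which is presumably what tripped the x86 vmap stack
debug assert in v2.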