From: Mike Rapoport <mike.rapoport@xxxxxxxxx>

Hi,

There have been several rounds of discussion about how to remap the
crash kernel area with base pages only, the latest one here:

https://lore.kernel.org/all/1656777473-73887-1-git-send-email-guanghuifeng@xxxxxxxxxxxxxxxxx

and this is my attempt to allow having both large pages in the linear
map and protection for the crash kernel memory.

For server systems it is important to protect crash kernel memory for
post-mortem analysis, and for that protection to work the crash kernel
memory must be mapped with base pages in the linear map.

On systems with ZONE_DMA/DMA32 enabled, the crash kernel reservation
happens after the linear map is created, and the current code forces
using base pages for the entire linear map, which results in
performance degradation.

These patches enable remapping of the crash kernel area with base pages
while keeping large pages in the rest of the linear map. The idea is to
align the crash kernel reservation to PUD boundaries, remap that PUD
and then free the extra memory.

For now the remapping does not deal with the case when the crash kernel
base is specified, but this won't be a problem to add if the idea is
generally acceptable.

RFC: https://lore.kernel.org/all/20220801080418.120311-1-rppt@xxxxxxxxxx

Mike Rapoport (5):
  arm64: rename defer_reserve_crashkernel() to have_zone_dma()
  arm64/mmu: drop _hotplug from unmap_hotplug_* function names
  arm64/mmu: move helpers for hotplug page tables freeing close to callers
  arm64/mm: remap crash kernel with base pages even if rodata_full disabled
  arm64/mmu: simplify logic around crash kernel mapping in map_mem()

 arch/arm64/include/asm/memory.h   |   2 +-
 arch/arm64/include/asm/mmu.h      |   3 +
 arch/arm64/kernel/machine_kexec.c |   6 ++
 arch/arm64/mm/init.c              |  69 +++++++++++---
 arch/arm64/mm/mmu.c               | 152 ++++++++++++++++--------------
 5 files changed, 147 insertions(+), 85 deletions(-)

base-commit: 568035b01cfb107af8d2e4bd2fb9aea22cf5b868
-- 
2.35.3