In order to isolate the kernel text mapping, we used a workaround that
consisted of marking the kernel text range as not mappable with
memblock_mark_nomap(). Use the newly introduced memblock_isolate_memory()
instead, which isolates the range in the same way but does not needlessly
mark the region as not mappable.

Signed-off-by: Alexandre Ghiti <alexghiti@xxxxxxxxxxxx>
---
 arch/arm64/mm/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6f9d8898a025..408dc852805c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -552,7 +552,7 @@ static void __init map_mem(pgd_t *pgdp)
 	 * So temporarily mark them as NOMAP to skip mappings in
 	 * the following for-loop
 	 */
-	memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
+	memblock_isolate_memory(kernel_start, kernel_end - kernel_start);
 
 #ifdef CONFIG_KEXEC_CORE
 	if (crash_mem_map) {
@@ -568,6 +568,7 @@ static void __init map_mem(pgd_t *pgdp)
 	for_each_mem_range(i, &start, &end) {
 		if (start >= end)
 			break;
+
 		/*
 		 * The linear map must allow allocation tags reading/writing
 		 * if MTE is present. Otherwise, it has the same attributes as
@@ -589,7 +590,6 @@
 	 */
 	__map_memblock(pgdp, kernel_start, kernel_end,
 		       PAGE_KERNEL, NO_CONT_MAPPINGS);
-	memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
 
 	/*
 	 * Use page-level mappings here so that we can shrink the region
-- 
2.37.2
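
For context, memblock_isolate_memory() is introduced earlier in this series
and its implementation is not shown here. A minimal sketch of what such a
helper could look like, assuming it simply wraps the existing
memblock_isolate_range() helper from mm/memblock.c (an illustration, not the
implementation from the series):

/*
 * Illustrative sketch only, assuming the new helper reuses the existing
 * memblock_isolate_range(): split memblock.memory so that
 * [base, base + size) ends up in its own region(s), without setting the
 * MEMBLOCK_NOMAP flag on it.
 */
int __init_memblock memblock_isolate_memory(phys_addr_t base, phys_addr_t size)
{
	int start_rgn, end_rgn;

	return memblock_isolate_range(&memblock.memory, base, size,
				      &start_rgn, &end_rgn);
}

Under that assumption, the kernel text boundaries become memblock region
boundaries, without the range also being flagged as NOMAP and treated as
unmappable elsewhere.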