Even if CMA is disabled, the for_each_memblock macro expands to run
reserve_bootmem once, so reserve_bootmem attempts to reserve a
zero-size region at location 0. Add a check to avoid that.

The issue was highlighted during testing with EVA enabled.
reserve_bootmem used to exit gracefully when asked to reserve a
zero-size region at location 0 without EVA. But with EVA enabled, the
macros point to different addresses and the code triggers a BUG.

Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@xxxxxxxxxx>
Tested-by: Markos Chandras <markos.chandras@xxxxxxxxxx>
---
 arch/mips/kernel/setup.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index 938f157..eacfd7d 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -683,7 +683,8 @@ static void __init arch_mem_init(char **cmdline_p)
 	dma_contiguous_reserve(PFN_PHYS(max_low_pfn));
 	/* Tell bootmem about cma reserved memblock section */
 	for_each_memblock(reserved, reg)
-		reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT);
+		if (reg->size != 0)
+			reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT);
 }
 
 static void __init resource_init(void)
-- 
1.9.1
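
For readers unfamiliar with why the loop body runs at all when nothing
was reserved: the kernel's reserved memblock type starts out with a
single empty dummy entry, so for_each_memblock(reserved, reg) iterates
once over a {base = 0, size = 0} region. Below is a minimal user-space
sketch of that behaviour and of the guard added by this patch. The
struct layout, the macro and the reserve_bootmem stub are simplified
illustrations, not the exact kernel definitions.

#include <stdio.h>

/* Simplified stand-ins for the kernel's memblock types. */
struct memblock_region {
	unsigned long base;
	unsigned long size;
};

struct memblock_type {
	unsigned long cnt;
	struct memblock_region regions[8];
};

struct memblock {
	struct memblock_type reserved;
};

/*
 * Like the kernel's memblock, the reserved type begins with one empty
 * dummy region, so cnt is 1 even before anything has been reserved.
 */
static struct memblock memblock = {
	.reserved = { .cnt = 1, .regions = { { .base = 0, .size = 0 } } },
};

/* Simplified for_each_memblock(): walks cnt regions of the given type. */
#define for_each_memblock(type, reg)					\
	for (reg = memblock.type.regions;				\
	     reg < memblock.type.regions + memblock.type.cnt;		\
	     reg++)

/* Stub: the real function marks the range in the bootmem bitmap. */
static void reserve_bootmem(unsigned long base, unsigned long size)
{
	printf("reserve_bootmem(base=%#lx, size=%#lx)\n", base, size);
}

int main(void)
{
	struct memblock_region *reg;

	/*
	 * With the size check from the patch, the initial empty region is
	 * skipped and reserve_bootmem() is never asked to reserve a
	 * zero-size range at address 0.
	 */
	for_each_memblock(reserved, reg)
		if (reg->size != 0)
			reserve_bootmem(reg->base, reg->size);

	return 0;
}

Without the size check, the sketch prints one zero-size reservation;
with it, nothing is printed until a real region is added, which mirrors
what the patch changes in arch_mem_init().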