barebox remaps all reserved entries as uncached using remap_range to
avoid speculative access into the described regions. The ARM
implementation requires buffers to be page aligned, which we can't
assume unconditionally. For this reason, reserve_sdram_region will
align region start and size before mapping uncached.

__mmu_init, called later on, will remap everything outside the
reserved entries as cached, e.g. to cache additional DRAM not known at
PBL time. No realignment happens at that point though, triggering the
BUG(!IS_ALIGNED) in ARM's arch_remap_range.

By moving the realignment before __request_sdram_region(), we ensure
that no misaligned memory regions are passed to arch_remap_range by
core code. This fixes chainloading barebox from an older barebox[1]
that reserves the FDT prior to relocation.

[1]: anything prior to 0b6b146a5508 ("fdt: Do not reserve device tree blob")

Reported-by: Uwe Kleine-König <u.kleine-koenig@xxxxxxxxxxxxxx>
Signed-off-by: Ahmad Fatoum <a.fatoum@xxxxxxxxxxxxxx>
---
 common/memory.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/common/memory.c b/common/memory.c
index d560d444b0a8..300320f85344 100644
--- a/common/memory.c
+++ b/common/memory.c
@@ -218,10 +218,6 @@ struct resource *reserve_sdram_region(const char *name, resource_size_t start,
 {
 	struct resource *res;
 
-	res = __request_sdram_region(name, IORESOURCE_BUSY, start, size);
-	if (IS_ERR(res))
-		return ERR_CAST(res);
-
 	if (!IS_ALIGNED(start, PAGE_SIZE)) {
 		pr_err("%s: %s start is not page aligned\n", __func__, name);
 		start = ALIGN_DOWN(start, PAGE_SIZE);
@@ -232,6 +228,10 @@ struct resource *reserve_sdram_region(const char *name, resource_size_t start,
 		size = ALIGN(size, PAGE_SIZE);
 	}
 
+	res = __request_sdram_region(name, IORESOURCE_BUSY, start, size);
+	if (IS_ERR(res))
+		return ERR_CAST(res);
+
 	remap_range((void *)start, size, MAP_UNCACHED);
 
 	return res;
-- 
2.39.2
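
For reference, with the reordering applied, reserve_sdram_region should
read roughly as below. This is only a sketch reconstructed from the hunk
context above: the second parameter name/type and the size-alignment
branch are assumed to mirror what the visible context suggests, not
quoted from the tree.

/*
 * Sketch of reserve_sdram_region() in common/memory.c with this patch
 * applied. Assumed details: the size parameter and the size-alignment
 * branch, which are inferred from the surrounding hunk context.
 */
struct resource *reserve_sdram_region(const char *name, resource_size_t start,
				       resource_size_t size)
{
	struct resource *res;

	/* realign first, so the recorded region is already page aligned */
	if (!IS_ALIGNED(start, PAGE_SIZE)) {
		pr_err("%s: %s start is not page aligned\n", __func__, name);
		start = ALIGN_DOWN(start, PAGE_SIZE);
	}

	if (!IS_ALIGNED(size, PAGE_SIZE)) {
		pr_err("%s: %s size is not page aligned\n", __func__, name);
		size = ALIGN(size, PAGE_SIZE);
	}

	/* only now record the (aligned) region ... */
	res = __request_sdram_region(name, IORESOURCE_BUSY, start, size);
	if (IS_ERR(res))
		return ERR_CAST(res);

	/*
	 * ... and map it uncached; ARM's arch_remap_range BUGs on
	 * misaligned ranges, hence the alignment above.
	 */
	remap_range((void *)start, size, MAP_UNCACHED);

	return res;
}

Because the region is recorded already aligned, __mmu_init's later pass
over the reservations never sees misaligned boundaries.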