Before this patch, when no --image-size is passed, initrd_base is
calculated as base + len * 4, which may be unaligned and then fails the
check in add_segment_phys_virt():

	if (base & (pagesize -1)) {
		die("Base address: 0x%lx is not page aligned\n", base);
	}

Signed-off-by: Wang Nan <wangnan0 at huawei.com>
Cc: Simon Horman <horms at verge.net.au>
Cc: Dave Young <dyoung at redhat.com>
Cc: Geng Hui <hui.geng at huawei.com>
---
 kexec/arch/arm/kexec-zImage-arm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kexec/arch/arm/kexec-zImage-arm.c b/kexec/arch/arm/kexec-zImage-arm.c
index 792187a..4547765 100644
--- a/kexec/arch/arm/kexec-zImage-arm.c
+++ b/kexec/arch/arm/kexec-zImage-arm.c
@@ -351,7 +351,7 @@ int zImage_arm_load(int argc, char **argv, const char *buf, off_t len,
 	} else {
 		/* Otherwise, assume the maximum kernel compression ratio
 		 * is 4, and just to be safe, place ramdisk after that */
-		initrd_base = base + len * 4;
+		initrd_base = base + _ALIGN(len * 4, 4096);
 	}
 
 	if (use_atags) {
-- 
1.8.4
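
[Editor's illustration, not part of the patch] A minimal standalone sketch of
the arithmetic involved: it assumes base is page aligned and that _ALIGN
rounds its first argument up to a multiple of the second using the usual
mask-based macro (the real definition lives in kexec-tools headers and may
differ). The ALIGN_UP macro, base, and len values below are hypothetical
stand-ins chosen only to show why rounding the offset restores alignment.

	#include <stdio.h>

	#define PAGE_SIZE 4096UL
	/* Hypothetical stand-in for kexec's _ALIGN(): round addr up to a multiple of size. */
	#define ALIGN_UP(addr, size) (((addr) + ((size) - 1)) & ~((size) - 1))

	int main(void)
	{
		unsigned long base = 0x80008000UL;   /* page-aligned load address (example) */
		unsigned long len  = 0x339c55UL;     /* arbitrary zImage length (example)   */

		unsigned long old_initrd_base = base + len * 4;
		unsigned long new_initrd_base = base + ALIGN_UP(len * 4, PAGE_SIZE);

		/* The old value trips the (base & (pagesize - 1)) check; the new one does not. */
		printf("old: 0x%lx, offset into page = 0x%lx\n",
		       old_initrd_base, old_initrd_base & (PAGE_SIZE - 1));
		printf("new: 0x%lx, offset into page = 0x%lx\n",
		       new_initrd_base, new_initrd_base & (PAGE_SIZE - 1));
		return 0;
	}

With a page-aligned base, aligning the len * 4 offset up to 4096 is enough
to keep initrd_base page aligned, which is what the one-line change does.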