Plain memset and memcpy are checked by KASAN, if it is enabled, before
they call the unchecked __memset and __memcpy, respectively.

KASAN uses a kasan_initialized variable as the first condition in its
memory check, but that only works after relocation. For that reason, we
must take care not to invoke KASAN before then. This was done for
ARM32, but was missing for ARM64. Do so now.

This fixes an annoying issue where network booting a KASAN-enabled
barebox twice in a row would fail: The first boot happened to work
because the memory kasan_initialized was placed at was still zero. The
second would behave erratically, because BSS initialization would
silently fail and barebox's static storage would then be initialized
with the final values of the previous run.

Fixes: 932ef7a02e2f ("ARM: Add KASan support")
Signed-off-by: Ahmad Fatoum <a.fatoum@xxxxxxxxxxxxxx>
---
I wondered if there's a way to print a KASAN error that early, but it's
not easy. Even calling global_variable_offset() in kasan_report caused
infinite recursion, despite the use of __no_sanitize_address. Printing
unconditionally could be a way around this.
---
 arch/arm/cpu/setupc_64.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm/cpu/setupc_64.S b/arch/arm/cpu/setupc_64.S
index d64281c148fc..f38f893be90b 100644
--- a/arch/arm/cpu/setupc_64.S
+++ b/arch/arm/cpu/setupc_64.S
@@ -14,7 +14,7 @@ ENTRY(setup_c)
 	mov	x1, #0
 	ldr	x2, =__bss_stop
 	sub	x2, x2, x0
-	bl	memset			/* clear bss */
+	bl	__memset		/* clear bss */
 	mov	x30, x15
 	ret
 ENDPROC(setup_c)
@@ -63,7 +63,7 @@ ENTRY(relocate_to_adr)
 	sub	x19, x19, x1		/* sub address where we are actually running */
 	add	x19, x19, x0		/* add address where we are going to run */

-	bl	memcpy			/* copy binary */
+	bl	__memcpy		/* copy binary */

 	bl	sync_caches_for_execution

-- 
2.39.2
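
For readers unfamiliar with the mechanism, the interception the commit
message describes looks roughly like the C sketch below. The names
kasan_initialized, __memset and __memcpy are taken from the patch;
check_memory_region and its body are a simplified assumption for
illustration, not barebox's exact implementation:

  #include <stddef.h>

  /* set once KASAN shadow memory is usable, i.e. only after relocation */
  extern int kasan_initialized;

  /* unchecked implementations, safe to call before relocation */
  void *__memset(void *s, int c, size_t n);
  void *__memcpy(void *dst, const void *src, size_t n);

  /* hypothetical, simplified check; the real code consults shadow memory */
  static void check_memory_region(unsigned long addr, size_t size)
  {
  	if (!kasan_initialized)	/* reads garbage before relocation! */
  		return;
  	/* ... inspect shadow bytes for [addr, addr + size), report bugs ... */
  }

  /* the checked entry points that plain memset/memcpy calls resolve to */
  void *memset(void *s, int c, size_t n)
  {
  	check_memory_region((unsigned long)s, n);
  	return __memset(s, c, n);
  }

  void *memcpy(void *dst, const void *src, size_t n)
  {
  	check_memory_region((unsigned long)dst, n);
  	check_memory_region((unsigned long)src, n);
  	return __memcpy(dst, src, n);
  }

Before relocation, the memory backing kasan_initialized holds whatever
was left in RAM: on the second network boot it still contains the
previous run's non-zero value, so memset takes the checking path with
invalid shadow state while clearing the very BSS it depends on. That is
why the early assembly must branch to __memset/__memcpy directly.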