A multi_v8 barebox with KASAN enabled is 2051804 bytes even after
compression and this breaks linking for me:

arch/arm/cpu/common.o: in function `global_variable_offset':
arch/arm/include/asm/reloc.h:20:(.text.relocate_to_current_adr+0x1c): relocation truncated to fit: R_AARCH64_ADR_PREL_LO21 against symbol `_text' defined in .text section in .tmp_barebox1
arch/arm/include/asm/reloc.h:20:(.text.relocate_to_current_adr+0x40): relocation truncated to fit: R_AARCH64_ADR_PREL_LO21 against symbol `_text' defined in .text section in .tmp_barebox1

This is due to adr's limitation of only addressing bytes +/- 1 MiB from
the current PC. We have a solution for this in the form of the adr_l
macro, which we define for out-of-line assembly. Open-code this in the
inline assembly function by using adrp to compute the page address and
then add to arrive at the correct offset within the page.

Signed-off-by: Ahmad Fatoum <a.fatoum@xxxxxxxxxxxxxx>
---
v1 -> v2:
  - fix typos: s/adrp/adr_l/, add unit (bytes) after barebox size

 arch/arm/include/asm/reloc.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/reloc.h b/arch/arm/include/asm/reloc.h
index 95b4ef0af88b..2d7411ab5284 100644
--- a/arch/arm/include/asm/reloc.h
+++ b/arch/arm/include/asm/reloc.h
@@ -18,7 +18,8 @@ static inline __prereloc unsigned long global_variable_offset(void)
 	unsigned long text;
 
 	__asm__ __volatile__(
-		"adr %0, _text\n"
+		"adrp %0, _text\n"
+		"add %0, %0, :lo12:_text\n"
 		: "=r" (text)
 		:
 		: "memory");
-- 
2.39.2