In the kernel image's vmlinux.lds.S linker scripts the .altinstructions
and __bug_table sections are 32- or 64-bit aligned because they hold
32- and/or 64-bit values.

For modules, the module.lds.S linker script doesn't define an alignment
for these sections yet, so the linker falls back to the default byte
alignment, which leads to unnecessary unaligned memory accesses at
runtime.

Usually such unaligned accesses go unnoticed, because either the
hardware (as on x86 CPUs) or in-kernel exception handlers (e.g. on hppa
or sparc) emulate and fix them up at runtime. On hppa, however, the
32-bit unaligned-access exception handler was temporarily broken by
another bad commit, so wrong values were returned for unaligned
accesses to the .altinstructions table. This led to undefined behaviour
because wrong kernel addresses were patched, and we suddenly faced lots
of seemingly unrelated bugs, as can be seen in this mail thread:

https://lore.kernel.org/all/07d91863-dacc-a503-aa2b-05c3b92a1e39@xxxxxxxx/T/#mab602dfa32be5e229d5e192ab012af196d04d75d

This patch adds the missing natural alignment for kernel modules to
avoid such unnecessary (hardware- or software-based) fixups.

Signed-off-by: Helge Deller <deller@xxxxxx>
---
 scripts/module.lds.S | 2 ++
 1 file changed, 2 insertions(+)

--
v2: updated commit message

diff --git a/scripts/module.lds.S b/scripts/module.lds.S
index 1d0e1e4dc3d2..3a3aa2354ed8 100644
--- a/scripts/module.lds.S
+++ b/scripts/module.lds.S
@@ -27,6 +27,8 @@ SECTIONS {
 	.ctors			0 : ALIGN(8) { *(SORT(.ctors.*)) *(.ctors) }
 	.init_array		0 : ALIGN(8) { *(SORT(.init_array.*)) *(.init_array) }
 
+	.altinstructions	0 : ALIGN(8) { KEEP(*(.altinstructions)) }
+	__bug_table		0 : ALIGN(8) { KEEP(*(__bug_table)) }
 	__jump_table		0 : ALIGN(8) { KEEP(*(__jump_table)) }
 
 	__patchable_function_entries : { *(__patchable_function_entries) }
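
For reference, below is a simplified, illustrative sketch of the records
that end up in the __bug_table section. It is loosely based on struct
bug_entry from include/asm-generic/bug.h, shown in the absolute-pointer
variant with the CONFIG_GENERIC_BUG_RELATIVE_POINTERS and
CONFIG_DEBUG_BUGVERBOSE preprocessor selection left out; the exact
layout is config- and arch-dependent. The pointer-sized and 32-bit
members are why the section wants natural alignment rather than the
linker's default byte alignment:

/*
 * Simplified sketch of one __bug_table record (not the exact kernel
 * definition): each BUG()/WARN() site emits one such entry into the
 * __bug_table section.  The pointer-sized members need natural
 * (4- or 8-byte) alignment, which a byte-aligned module section
 * does not guarantee.
 */
struct bug_entry {
	unsigned long	bug_addr;	/* address of the trapping instruction */
	const char	*file;		/* source file of the BUG()/WARN() site */
	unsigned short	line;		/* source line of the BUG()/WARN() site */
	unsigned short	flags;		/* BUGFLAG_* bits */
};

The .altinstructions entries (struct alt_instr) are similar in spirit:
on most architectures they consist of 32-bit offset and length fields,
so ALIGN(8) in the linker script covers both sections on 32- and 64-bit
kernels.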