On Wed, Feb 26, 2025 at 08:15:37PM +0800, Menglong Dong wrote:
> In x86, we need 5 bytes to prepend a "mov %eax xxx" insn, which can
> hold a 4-byte index. So we have the following logic:
>
> 1. use the head 5 bytes if CFI_CLANG is not enabled
> 2. use the tail 5 bytes if MITIGATION_CALL_DEPTH_TRACKING is not enabled
> 3. compile the kernel with an extra 5 bytes of padding if
>    MITIGATION_CALL_DEPTH_TRACKING and CFI_CLANG are both enabled.

3) would result in 16+5 bytes of padding; what does that do for
alignment? Functions should be 16-byte aligned.

Also, did you make sure all the code in arch/x86/kernel/alternative.c
still works? Because adding extra padding in the CFI_CLANG case moves
where the CFI bytes are emitted, and all the CFI rewriting code goes
sideways.
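
To spell out the arithmetic behind the alignment question (a minimal
sketch, assuming the 5-byte slot is the "b8 <imm32>" encoding of
mov $imm32,%eax and the 16 bytes are the existing call depth tracking
padding; the macro names below are made up for illustration, not taken
from the patch or the kernel):

	/* 5-byte mov $imm32,%eax: b8 xx xx xx xx, carries a 4-byte index */
	#define INDEX_SLOT	5
	/* padding already emitted per function for call depth tracking */
	#define CDT_PAD		16
	/* function entries are expected to stay 16-byte aligned */
	#define FUNC_ALIGN	16

	#define ROUND_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

	/* 16 + 5 = 21, so keeping entries aligned rounds the gap up to 32 */
	#define TOTAL_PAD	ROUND_UP(CDT_PAD + INDEX_SLOT, FUNC_ALIGN)

In other words, either the entry point stops being 16-byte aligned, or
every function's padding has to grow to the next 16-byte boundary.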