On Tue, Apr 5, 2022 at 12:55 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> Clang can inline emit_indirect_jump() and then fold constants, which
> results in:
>
> | vmlinux.o: warning: objtool: emit_bpf_dispatcher()+0x6a4: relocation to !ENDBR: .text.__x86.indirect_thunk+0x40
> | vmlinux.o: warning: objtool: emit_bpf_dispatcher()+0x67d: relocation to !ENDBR: .text.__x86.indirect_thunk+0x40
> | vmlinux.o: warning: objtool: emit_bpf_tail_call_indirect()+0x386: relocation to !ENDBR: .text.__x86.indirect_thunk+0x20
> | vmlinux.o: warning: objtool: emit_bpf_tail_call_indirect()+0x35d: relocation to !ENDBR: .text.__x86.indirect_thunk+0x20
>
> Suppress the optimization such that it must emit a code reference to
> the __x86_indirect_thunk_array[] base.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> ---
>  arch/x86/net/bpf_jit_comp.c | 1 +
>  1 file changed, 1 insertion(+)
>
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -412,6 +412,7 @@ static void emit_indirect_jump(u8 **ppro
> 		EMIT_LFENCE();
> 		EMIT2(0xFF, 0xE0 + reg);
> 	} else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
> +		OPTIMIZER_HIDE_VAR(reg);
> 		emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip);
> 	} else
> #endif

Looks good. Please cc bpf@vger and all bpf maintainers in the future.
We can take it through the bpf tree if you prefer.
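
For anyone following along, the trick here is an empty inline asm that
makes the value opaque to the compiler. A minimal stand-alone sketch of
the idea (HIDE_VAR and thunk_array below are illustrative stand-ins,
not the kernel's actual definitions):

	/*
	 * Empty asm with a "+r" constraint: the compiler must assume
	 * 'var' may have changed, so it can no longer constant-fold
	 * expressions that depend on it.
	 */
	#define HIDE_VAR(var) __asm__ ("" : "+r" (var))

	/* Stand-in for __x86_indirect_thunk_array[] */
	extern const char thunk_array[32][16];

	static const void *thunk_for(int reg)
	{
		HIDE_VAR(reg);			/* 'reg' is now unknown at compile time */
		return &thunk_array[reg];	/* indexed off the array base at run time */
	}

With the index hidden, the compiler has to reference the array base
symbol and compute the per-register thunk address at run time, instead
of emitting a link-time constant relocation into the middle of the
thunk section, which is what objtool was complaining about.
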