On Wed, Apr 6, 2022 at 3:46 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Tue, Apr 05, 2022 at 09:58:28AM -0700, Alexei Starovoitov wrote:
> > On Tue, Apr 5, 2022 at 12:55 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > >
> > > Clang can inline emit_indirect_jump() and then fold constants, which
> > > results in:
> > >
> > >   | vmlinux.o: warning: objtool: emit_bpf_dispatcher()+0x6a4: relocation to !ENDBR: .text.__x86.indirect_thunk+0x40
> > >   | vmlinux.o: warning: objtool: emit_bpf_dispatcher()+0x67d: relocation to !ENDBR: .text.__x86.indirect_thunk+0x40
> > >   | vmlinux.o: warning: objtool: emit_bpf_tail_call_indirect()+0x386: relocation to !ENDBR: .text.__x86.indirect_thunk+0x20
> > >   | vmlinux.o: warning: objtool: emit_bpf_tail_call_indirect()+0x35d: relocation to !ENDBR: .text.__x86.indirect_thunk+0x20
> > >
> > > Suppress the optimization such that it must emit a code reference to
> > > the __x86_indirect_thunk_array[] base.
> > >
> > > Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> > > ---
> > >  arch/x86/net/bpf_jit_comp.c | 1 +
> > >  1 file changed, 1 insertion(+)
> > >
> > > --- a/arch/x86/net/bpf_jit_comp.c
> > > +++ b/arch/x86/net/bpf_jit_comp.c
> > > @@ -412,6 +412,7 @@ static void emit_indirect_jump(u8 **ppro
> > >  		EMIT_LFENCE();
> > >  		EMIT2(0xFF, 0xE0 + reg);
> > >  	} else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
> > > +		OPTIMIZER_HIDE_VAR(reg);
> > >  		emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip);
> > >  	} else
> > >  #endif
> >
> > Looks good. Please cc bpf@vger and all bpf maintainers in the future.
>
> Oh right, I'll go add an alias for that.
>
> > We can take it through the bpf tree if you prefer.
>
> I'll take it through the x86/urgent tree if you don't mind.

Sure. Then pls add:

Acked-by: Alexei Starovoitov <ast@xxxxxxxxxx>
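
For reference, OPTIMIZER_HIDE_VAR() lives in include/linux/compiler.h; around the time of this thread it is an empty asm statement that launders the value through a register, so the compiler can no longer track it as a compile-time constant:

	#define OPTIMIZER_HIDE_VAR(var) \
		__asm__ ("" : "=r" (var) : "0" (var))

The sketch below shows the effect outside the kernel. It is illustrative only: pick_thunk() and thunk_array[] are hypothetical stand-ins for emit_indirect_jump() and __x86_indirect_thunk_array[], not actual kernel code.

	#include <stdio.h>

	/* Same shape as the kernel macro: force "var" through a register. */
	#define OPTIMIZER_HIDE_VAR(var) \
		__asm__ ("" : "=r" (var) : "0" (var))

	/* Hypothetical stand-in for __x86_indirect_thunk_array[]. */
	static const unsigned long thunk_array[16];

	static unsigned long pick_thunk(int reg)
	{
		/*
		 * Without this barrier, a call like pick_thunk(5) lets the
		 * compiler fold &thunk_array[reg] down to thunk_array + 0x28,
		 * i.e. a reference into the middle of the array -- the
		 * "relocation to !ENDBR" pattern objtool complains about
		 * above.  With the barrier, reg is opaque and the generated
		 * code must index from the thunk_array base at run time.
		 */
		OPTIMIZER_HIDE_VAR(reg);
		return (unsigned long)&thunk_array[reg];
	}

	int main(void)
	{
		printf("thunk for reg 5: %#lx\n", pick_thunk(5));
		return 0;
	}

Comparing the object code with and without the OPTIMIZER_HIDE_VAR() line (e.g. via objdump -dr) should show the relocation moving from an array-interior offset to the array base, which is exactly what the one-line patch achieves for the JIT.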