On Thu, Dec 26, 2024 at 11:07:10PM +0000, Peilin Ye wrote:
> > > +	if (BPF_ATOMIC_TYPE(insn->imm) == BPF_ATOMIC_LOAD)
> > > +		ptr = src;
> > > +	else
> > > +		ptr = dst;
> > > +
> > > +	if (off) {
> > > +		emit_a64_mov_i(true, tmp, off, ctx);
> > > +		emit(A64_ADD(true, tmp, tmp, ptr), ctx);
> >
> > The mov and add instructions can be optimized to a single A64_ADD_I
> > if is_addsub_imm(off) is true.
>
> Thanks! I'll try this.

The following diff seems to work:

--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -658,9 +658,15 @@ static int emit_atomic_load_store(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		ptr = dst;
 
 	if (off) {
-		emit_a64_mov_i(true, tmp, off, ctx);
-		emit(A64_ADD(true, tmp, tmp, ptr), ctx);
-		ptr = tmp;
+		if (is_addsub_imm(off)) {
+			emit(A64_ADD_I(true, tmp, ptr, off), ctx);
+		} else if (is_addsub_imm(-off)) {
+			emit(A64_SUB_I(true, tmp, ptr, -off), ctx);
+		} else {
+			emit_a64_mov_i(true, tmp, off, ctx);
+			emit(A64_ADD(true, tmp, tmp, ptr), ctx);
+		}
+		ptr = tmp;
 	}
 	if (arena) {
 		emit(A64_ADD(true, tmp, ptr, arena_vm_base), ctx);

(Note that the sum goes into tmp rather than back into ptr: ptr aliases
the BPF source/destination register, which must not be clobbered.)

I'll include it in the next version.  I think the same thing can be done
for emit_lse_atomic() and emit_ll_sc_atomic(); let me do that in a
separate patch.

Thanks,
Peilin Ye
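P.S. Since emit_lse_atomic() and emit_ll_sc_atomic() would repeat the
same three-way branch, the follow-up could factor it into a small shared
helper. A minimal sketch (the helper name emit_a64_add_i and its exact
signature are illustrative here, not part of this patch):

/*
 * Sketch only: add a signed immediate to src, leaving the result in dst.
 * tmp is a scratch register for immediates that do not fit the A64
 * ADD/SUB (immediate) encoding.
 */
static void emit_a64_add_i(const bool is64, const int dst, const int src,
			   const int tmp, const s32 imm, struct jit_ctx *ctx)
{
	if (is_addsub_imm(imm)) {
		/* imm fits the ADD (immediate) encoding */
		emit(A64_ADD_I(is64, dst, src, imm), ctx);
	} else if (is_addsub_imm(-imm)) {
		/* -imm fits: emit SUB (immediate) instead */
		emit(A64_SUB_I(is64, dst, src, -imm), ctx);
	} else {
		/* out of range: materialize imm into tmp, then add */
		emit_a64_mov_i(is64, tmp, imm, ctx);
		emit(A64_ADD(is64, dst, src, tmp), ctx);
	}
}

The if (off) block above would then reduce to:

	if (off) {
		emit_a64_add_i(true, tmp, ptr, tmp, off, ctx);
		ptr = tmp;
	}

(For reference, is_addsub_imm() accepts exactly what the A64 ADD/SUB
immediate encoding can represent: a 12-bit immediate, optionally shifted
left by 12 bits.)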