On Wed, Nov 24, 2021 at 2:43 PM Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote:
>
> > +		/* Implement bpf_arg inline. */
> > +		if (prog_type == BPF_PROG_TYPE_TRACING &&
> > +		    insn->imm == BPF_FUNC_arg) {
> > +			/* Load nr_args from ctx - 8 */
> > +			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> > +			insn_buf[1] = BPF_JMP32_REG(BPF_JGE, BPF_REG_2, BPF_REG_0, 4);
> > +			insn_buf[2] = BPF_ALU64_IMM(BPF_MUL, BPF_REG_2, 8);
> > +			insn_buf[3] = BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1);
> > +			insn_buf[4] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0);
> > +			insn_buf[5] = BPF_JMP_A(1);
> > +			insn_buf[6] = BPF_MOV64_IMM(BPF_REG_0, 0);
> > +
> > +			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 7);
> > +			if (!new_prog)
> > +				return -ENOMEM;
> > +
> > +			delta += 6;
> > +			env->prog = prog = new_prog;
> > +			insn = new_prog->insnsi + i + delta;
> > +			continue;
>
> nit: this whole sequence of steps and calculations seems like
> something that might be abstracted and hidden behind a macro or helper
> func? Not related to your change, though. But wouldn't it be easier to
> understand if it was just written as:
>
> PATCH_INSNS(
>     BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
>     BPF_JMP32_REG(BPF_JGE, BPF_REG_2, BPF_REG_0, 4);
>     BPF_ALU64_IMM(BPF_MUL, BPF_REG_2, 8);
>     BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1);
>     BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0);
>     BPF_JMP_A(1);
>     BPF_MOV64_IMM(BPF_REG_0, 0));

Daniel and I tried similar macro magic in the past, but it suffers from
unnecessary stack increase and extra copies, so eventually we got rid of it.
I suggest staying with Jiri's approach.

Independent from anything else... I just noticed BPF_MUL in the above.
Please use BPF_LSH instead. JITs don't optimize such things; that's the
job of gcc/llvm. JITs assume that all normal optimizations were already
done by the compiler.
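
To make that concrete, a shift-based version of the insn_buf[2] line,
keeping the register usage from the quoted patch, would look roughly like:

	/* arg index * 8 expressed as a left shift by 3 instead of BPF_MUL */
	insn_buf[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 3);

The rest of the patched sequence stays the same.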