Re: [PATCH bpf-next 1/4] bpf: add internal-only per-CPU LDX instructions

On Fri, Mar 29, 2024 at 5:26 PM Stanislav Fomichev <sdf@xxxxxxxxxx> wrote:
>
> On 03/29, Andrii Nakryiko wrote:
> > Add BPF instructions for working with per-CPU data. These instructions
> > are internal-only and users are not allowed to use them directly. They
> > will only be used for internal inlining optimizations for now.
> >
> > Two different instructions are added. One, with the BPF_MEM_PERCPU
> > opcode, performs a memory dereference of a per-CPU "address" (which is
> > actually an offset). This one is useful when inlined logic needs to
> > load per-CPU data (bpf_get_smp_processor_id() is one such example).
> >
> > Another, with the BPF_ADDR_PERCPU opcode, resolves a per-CPU address
> > (offset) stored in a register. This one is useful anywhere per-CPU
> > data is not read, but rather returned to the user as an absolute raw
> > memory pointer (in bpf_map_lookup_elem() helper inlining, for example).
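
Again sketch-only (BPF_LDX_ADDR_PERCPU is an assumed name and signature
for the macro whose definition is trimmed further down): an inlined
lookup that leaves the element's per-CPU offset in R0 needs just one
extra instruction before returning:

    /* turn the per-CPU offset in R0 into an absolute pointer that is
     * valid on the current CPU only
     */
    insn_buf[cnt++] = BPF_LDX_ADDR_PERCPU(BPF_REG_0, BPF_REG_0);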
> >
> > The BPF disassembler is also taught to recognize them, to support
> > dumping the final BPF assembly code (the non-JIT'ed version).
> >
> > Add an arch-specific way for BPF JITs to mark support for these instructions.
> >
> > This patch also adds support for these instructions in the x86-64 BPF JIT.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
> > ---
> >  arch/x86/net/bpf_jit_comp.c | 29 +++++++++++++++++++++++++++++
> >  include/linux/filter.h      | 27 +++++++++++++++++++++++++++
> >  kernel/bpf/core.c           |  5 +++++
> >  kernel/bpf/disasm.c         | 33 ++++++++++++++++++++++++++-------
> >  4 files changed, 87 insertions(+), 7 deletions(-)
> >
> > diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> > index 3b639d6f2f54..610bbedaae70 100644
> > --- a/arch/x86/net/bpf_jit_comp.c
> > +++ b/arch/x86/net/bpf_jit_comp.c
> > @@ -1910,6 +1910,30 @@ st:                    if (is_imm8(insn->off))
> >                       }
> >                       break;
> >
> > +             /* internal-only per-cpu zero-extending memory load */
> > +             case BPF_LDX | BPF_MEM_PERCPU | BPF_B:
> > +             case BPF_LDX | BPF_MEM_PERCPU | BPF_H:
> > +             case BPF_LDX | BPF_MEM_PERCPU | BPF_W:
> > +             case BPF_LDX | BPF_MEM_PERCPU | BPF_DW:
> > +                     insn_off = insn->off;
> > +                     EMIT1(0x65); /* gs segment modifier */
> > +                     emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
> > +                     break;
> > +
> > +             /* internal-only load-effective-address-of per-cpu offset */
> > +             case BPF_LDX | BPF_ADDR_PERCPU | BPF_DW: {
> > +                     u32 off = (u32)(unsigned long)&this_cpu_off;
> > +
> > +                     /* mov <dst>, <src> (if necessary) */
> > +                     EMIT_mov(dst_reg, src_reg);
> > +
> > +                     /* add <dst>, gs:[<off>] */
> > +                     EMIT2(0x65, add_1mod(0x48, dst_reg));
> > +                     EMIT3(0x03, add_1reg(0x04, dst_reg), 0x25);
> > +                     EMIT(off, 4);
> > +
> > +                     break;
> > +             }
> >               case BPF_STX | BPF_ATOMIC | BPF_W:
> >               case BPF_STX | BPF_ATOMIC | BPF_DW:
> >                       if (insn->imm == (BPF_AND | BPF_FETCH) ||
> > @@ -3365,6 +3389,11 @@ bool bpf_jit_supports_subprog_tailcalls(void)
> >       return true;
> >  }
> >
> > +bool bpf_jit_supports_percpu_insns(void)
> > +{
> > +     return true;
> > +}
> > +
> >  void bpf_jit_free(struct bpf_prog *prog)
> >  {
> >       if (prog->jited) {
> > diff --git a/include/linux/filter.h b/include/linux/filter.h
> > index 44934b968b57..85ffaa238bc1 100644
> > --- a/include/linux/filter.h
> > +++ b/include/linux/filter.h
> > @@ -75,6 +75,14 @@ struct ctl_table_header;
> >  /* unused opcode to mark special load instruction. Same as BPF_MSH */
> >  #define BPF_PROBE_MEM32      0xa0
> >
> > +/* unused opcode to mark special zero-extending per-cpu load instruction. */
> > +#define BPF_MEM_PERCPU       0xc0
> > +
> > +/* unused opcode to mark special load-effective-address-of instruction for
> > + * a given per-CPU offset
> > + */
> > +#define BPF_ADDR_PERCPU      0xe0
> > +
> >  /* unused opcode to mark call to interpreter with arguments */
> >  #define BPF_CALL_ARGS        0xe0
> >
> > @@ -318,6 +326,24 @@ static inline bool insn_is_cast_user(const struct bpf_insn *insn)
> >               .off   = OFF,                                   \
> >               .imm   = 0 })
> >
> > +/* Per-CPU zero-extending memory load (internal-only) */
> > +#define BPF_LDX_MEM_PERCPU(SIZE, DST, SRC, OFF)                      \
> > +     ((struct bpf_insn) {                                    \
> > +             .code  = BPF_LDX | BPF_SIZE(SIZE) | BPF_MEM_PERCPU,\
> > +             .dst_reg = DST,                                 \
> > +             .src_reg = SRC,                                 \
> > +             .off   = OFF,                                   \
> > +             .imm   = 0 })
> > +
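
For reference, standalone use of the new macro looks like this
(registers and offset picked arbitrarily):

    /* r0 = *(percpu u32 *)(r1 + 0): load the 4-byte per-CPU value
     * whose per-CPU offset is in R1, zero-extended into R0
     */
    struct bpf_insn ld = BPF_LDX_MEM_PERCPU(BPF_W, BPF_REG_0, BPF_REG_1, 0);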
>
> [..]
>
> > +/* Load effective address of a given per-CPU offset */
>
> nit: mark this one as internal only as well in the comment?
>

sure, will do, thanks

> (the change overall looks awesome, looking forward to trying it out)
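
For readers not fluent in the x86-64 per-CPU scheme the JIT hunk relies
on: the gs segment base points at the current CPU's per-CPU area, and
the per-CPU variable this_cpu_off holds the value to add when turning a
per-CPU address into an absolute one. A rough model of the two emitted
sequences (register names illustrative, BPF_W case for the load):

    /*
     * BPF_MEM_PERCPU is a normal load with a gs segment override, so
     * the CPU adds the per-CPU base implicitly:
     *
     *     mov eax, dword ptr gs:[rsi + off]   ; zero-extends into rax
     *
     * BPF_ADDR_PERCPU computes what this_cpu_ptr() computes:
     *
     *     mov rax, rsi                ; copy src (if dst != src)
     *     add rax, gs:[this_cpu_off]  ; rax += this CPU's offset
     */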