Re: [PATCH v1 bpf-next 3/9] bpf: expose how xlated insns map to jitted insns

On Tue, Feb 06, 2024 at 06:26:12PM -0800, Alexei Starovoitov wrote:
> On Tue, Feb 6, 2024 at 2:08 AM Anton Protopopov <aspsk@xxxxxxxxxxxxx> wrote:
> >
> > On Mon, Feb 05, 2024 at 05:09:51PM -0800, Alexei Starovoitov wrote:
> > > On Fri, Feb 2, 2024 at 8:34 AM Anton Protopopov <aspsk@xxxxxxxxxxxxx> wrote:
> > > >
> > > > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > > > index 4def3dde35f6..bdd6be718e82 100644
> > > > --- a/include/linux/bpf.h
> > > > +++ b/include/linux/bpf.h
> > > > @@ -1524,6 +1524,13 @@ struct bpf_prog_aux {
> > > >         };
> > > >         /* an array of original indexes for all xlated instructions */
> > > >         u32 *orig_idx;
> > > > +       /* for every xlated instruction point to all generated jited
> > > > +        * instructions, if allocated
> > > > +        */
> > > > +       struct {
> > > > +               u32 off;        /* local offset in the jitted code */
> > > > +               u32 len;        /* the total len of generated jit code */
> > > > +       } *xlated_to_jit;
> > >
> > > Simply put Nack to this approach.
> > >
> > > Patches 2 and 3 add an extreme amount of memory overhead.
> > >
> > > As we discussed during office hours we need a "pointer to insn" concept
> > > aka "index on insn".
> > > The verifier would need to track that such things exist and adjust
> > > indices of insns when patching affects those indices.
> > >
> > > For every static branch there will be one such "pointer to insn".
> > > Different algorithms can be used to keep them correct.
> > > The simplest 'lets iterate over all such pointers and update them'
> > > during patch_insn() may even be ok to start.
> > >
> > > Such "pointer to insn" won't add any memory overhead.
> > > When patch+jit is done all such "pointer to insn" are fixed value.
> >
> > Ok, thanks for looking, this makes sense.
> 
> Before jumping into coding I think it would be good to discuss
> the design first.
> I'm thinking such "address of insn" will be similar to
> existing "address of subprog",
> which is encoded in ld_imm64 as BPF_PSEUDO_FUNC.
> "address of insn" would be a bit more involved to track
> during JIT and likely trivial during insn patching,
> since we're already doing imm adjustment for pseudo_func.
> So that part of design is straightforward.
> Implementation in the kernel and libbpf can copy paste from pseudo_func too.

To implement the "primitive version" of static branches, where the
only API is `static_branch_update(xlated off, on/off)`, the only
requirement is to build the `xlated -> jitted` mapping (which is done
in the JIT, after verification). This can be done with a simplified
version of this patch: without the xlated->orig mapping, and with the
xlated->jit mapping recorded only for gotol_or_nop instructions.
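
For illustration only, such a reduced mapping could look roughly like
the below (bpf_branch_entry, branch_table and arch_patch_branch() are
made-up names, not code from this series):

  struct bpf_branch_entry {
      u32 xlated_off; /* insn offset in the xlated program */
      u32 jit_off;    /* offset of the generated code in the jit image */
      u32 jit_len;    /* length of the generated code, in bytes */
  };

  struct bpf_branch_table {
      u32 cnt;
      struct bpf_branch_entry entries[];
  };

  /* static_branch_update(xlated off, on/off): find the jitted location
   * for the given xlated offset and patch nop <-> jump there, e.g. via
   * bpf_arch_text_poke() or a similar arch-specific primitive.
   */
  static int static_branch_update(struct bpf_prog *prog, u32 xlated_off, bool on)
  {
      struct bpf_branch_table *tab = prog->aux->branch_table; /* made-up field */
      u32 i;

      for (i = 0; i < tab->cnt; i++) {
          struct bpf_branch_entry *e = &tab->entries[i];

          if (e->xlated_off != xlated_off)
              continue;
          return arch_patch_branch((u8 *)prog->bpf_func + e->jit_off,
                                    e->jit_len, on); /* made-up helper */
      }
      return -ENOENT;
  }

The point being that only the gotol_or_nop locations are remembered,
so the memory overhead is proportional to the number of static
branches, not to the program size.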

The "address of insn" appears when we want to provide a more
higher-level API when some object (in user-space or in kernel) keeps
track of one or more gotol_or_nop instructions so that after the
program load this controlling object has a list of xlated offsets.
But this would be a follow-up to the initial static branches patch.

> The question is whether such "address of insn" should be allowed
> in the data section. If so, we need to brainstorm how to
> do it cleanly.
> We had various hacks for similar things in the past. Like prog_array.
> Let's not repeat such mistakes.

So, is a data section required for implementing jump tables? That is,
to add a new PTR_TO_LABEL or PTR_TO_INSN type, plus a corresponding
"pointer to insn" object for every occurrence of &&label, which would
be adjusted during verification. It looks to me like this doesn't
require any more API than specifying a list of &&label occurrences at
program load.
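
For context, this is the kind of C code such a feature would enable
(computed goto is not something the verifier accepts today; just an
illustration):

  /* Each &&label below is an "address of insn" stored in a data
   * section; every such pointer has to be adjusted whenever the
   * verifier patches instructions.
   */
  static int dispatch(int op, int a, int b)
  {
      static void *jt[] = { &&do_add, &&do_sub, &&do_mul };

      if (op < 0 || op > 2)
          return -1;
      goto *jt[op]; /* indirect jump through the jump table */
  do_add:
      return a + b;
  do_sub:
      return a - b;
  do_mul:
      return a * b;
  }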

For "static keys" though (a feature on top of this patch series) we
need to have access to the corresponding set of adjusted pointers.

Wouldn't it be enough to add something like an array of

  struct insn_ptr {
      u32 type;     /* LABEL, STATIC_BRANCH, ... */
      u32 insn_off; /* original insn offset at load time */
      union {
          struct label { ... };                  /* jump table target */
          struct st_branch { u32 key_id; ... };  /* member of a static key */
      };
  };

to the load attributes, and then retrieve the xlated list after the
program is loaded? Then no new maps or APIs would be needed, and the
format could be extended later.
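
To make the intent concrete, a rough user-space sketch of how a
"static key" object could consume the returned list
(bpf_prog_update_branch() and the structs are made up, not existing
libbpf API):

  struct static_key {
      int prog_fd;
      __u32 nr_branches;
      __u32 *xlated_offs; /* adjusted offsets returned after load */
  };

  static int static_key_set(struct static_key *key, bool on)
  {
      __u32 i;
      int err;

      for (i = 0; i < key->nr_branches; i++) {
          /* made-up wrapper around static_branch_update() */
          err = bpf_prog_update_branch(key->prog_fd,
                                       key->xlated_offs[i], on);
          if (err)
              return err;
      }
      return 0;
  }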



