Re: [PATCH v2 bpf-next 3/4] bpf: inline bpf_map_lookup_elem() for PERCPU_ARRAY maps

On Mon, Apr 1, 2024 at 10:02 PM John Fastabend <john.fastabend@xxxxxxxxx> wrote:
>
> Andrii Nakryiko wrote:
> > Using the new per-CPU BPF instruction, implement inlining of the
> > per-CPU ARRAY map lookup helper, if BPF JIT support is present.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
> > ---
> >  kernel/bpf/arraymap.c | 33 +++++++++++++++++++++++++++++++++
> >  1 file changed, 33 insertions(+)
> >
> > diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
> > index 13358675ff2e..8c1e6d7654bb 100644
> > --- a/kernel/bpf/arraymap.c
> > +++ b/kernel/bpf/arraymap.c
> > @@ -246,6 +246,38 @@ static void *percpu_array_map_lookup_elem(struct bpf_map *map, void *key)
> >       return this_cpu_ptr(array->pptrs[index & array->index_mask]);
> >  }
> >
> > +/* emit BPF instructions equivalent to C code of percpu_array_map_lookup_elem() */
> > +static int percpu_array_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
> > +{
> > +     struct bpf_array *array = container_of(map, struct bpf_array, map);
> > +     struct bpf_insn *insn = insn_buf;
>
> Nit: if you wanted to be consistent with array_*_map_gen_lookup,
>

I didn't in this case; I found these "aliases" more confusing than helpful.

>         const int ret = BPF_REG_0;
>         const int map_ptr = BPF_REG_1;
>         const int index = BPF_REG_2;
>
> But I think it's easier to read as is.
>

Yep, that's what I thought as well.


> > +
> > +     if (!bpf_jit_supports_percpu_insn())
> > +             return -EOPNOTSUPP;
> > +
> > +     if (map->map_flags & BPF_F_INNER_MAP)
> > +             return -EOPNOTSUPP;
> > +
> > +     BUILD_BUG_ON(offsetof(struct bpf_array, map) != 0);
> > +     *insn++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct bpf_array, pptrs));
> > +
> > +     *insn++ = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0);
> > +     if (!map->bypass_spec_v1) {
> > +             *insn++ = BPF_JMP_IMM(BPF_JGE, BPF_REG_0, map->max_entries, 6);
> > +             *insn++ = BPF_ALU32_IMM(BPF_AND, BPF_REG_0, array->index_mask);
> > +     } else {
> > +             *insn++ = BPF_JMP_IMM(BPF_JGE, BPF_REG_0, map->max_entries, 5);
> > +     }
> > +
> > +     *insn++ = BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 3);
> > +     *insn++ = BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1);
> > +     *insn++ = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0);
> > +     *insn++ = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
> > +     *insn++ = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
> > +     *insn++ = BPF_MOV64_IMM(BPF_REG_0, 0);
> > +     return insn - insn_buf;
> > +}
> > +
> >  static void *percpu_array_map_lookup_percpu_elem(struct bpf_map *map, void *key, u32 cpu)
> >  {
> >       struct bpf_array *array = container_of(map, struct bpf_array, map);
> > @@ -776,6 +808,7 @@ const struct bpf_map_ops percpu_array_map_ops = {
> >       .map_free = array_map_free,
> >       .map_get_next_key = array_map_get_next_key,
> >       .map_lookup_elem = percpu_array_map_lookup_elem,
> > +     .map_gen_lookup = percpu_array_map_gen_lookup,
> >       .map_update_elem = array_map_update_elem,
> >       .map_delete_elem = array_map_delete_elem,
> >       .map_lookup_percpu_elem = percpu_array_map_lookup_percpu_elem,
>
> Acked-by: John Fastabend <john.fastabend@xxxxxxxxx>




