Re: [RFC v2 6/9] netfilter: add bpf base hook program generator

On Fri, Oct 7, 2022 at 4:45 AM Florian Westphal <fw@xxxxxxxxx> wrote:
>
> Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx> wrote:
> > > +   if (!emit(p, BPF_STX_MEM(BPF_H, BPF_REG_6, BPF_REG_8,
> > > +                            offsetof(struct nf_hook_state, hook_index))))
> > > +           return false;
> > > +   /* arg2: struct nf_hook_state * */
> > > +   if (!emit(p, BPF_MOV64_REG(BPF_REG_2, BPF_REG_6)))
> > > +           return false;
> > > +   /* arg3: original hook return value: (NUM << NF_VERDICT_QBITS | NF_QUEUE) */
> > > +   if (!emit(p, BPF_MOV32_REG(BPF_REG_3, BPF_REG_0)))
> > > +           return false;
> > > +   if (!emit(p, BPF_EMIT_CALL(nf_queue)))
> > > +           return false;
> >
> > here and the other CALLs work by accident on x86-64.
> > You need to wrap them with BPF_CALL_ and point BPF_EMIT_CALL to that wrapper.
>
> Do you mean this? :
>
> BPF_CALL_3(nf_queue_bpf, struct sk_buff *, skb, struct nf_hook_state *,
>            state, unsigned int, verdict)
> {
>      return nf_queue(skb, state, verdict);
> }

yep.
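
For reference, a condensed, hand-expanded sketch of what such a BPF_CALL_3()
wrapper boils down to (the real __BPF_* helper macros in
include/linux/filter.h are more involved, and the names below are only
illustrative): the exported function takes the five u64 register arguments
of the BPF calling convention and narrows them back to the native C types,
which is why a generated call instruction can target it safely on 32-bit
architectures as well:

/* Condensed sketch, not the literal macro expansion. It only shows the
 * argument narrowing that makes the helper callable from generated insns.
 */
static __always_inline u64 ____nf_queue_bpf(struct sk_buff *skb,
					    struct nf_hook_state *state,
					    unsigned int verdict)
{
	return nf_queue(skb, state, verdict);
}

/* BPF calling convention: up to five u64 register arguments (R1-R5). */
u64 nf_queue_bpf(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
{
	return ____nf_queue_bpf((struct sk_buff *)(unsigned long)r1,
				(struct nf_hook_state *)(unsigned long)r2,
				(unsigned int)r3);
}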

>
> -       if (!emit(p, BPF_EMIT_CALL(nf_hook_slow)))
> +       if (!emit(p, BPF_EMIT_CALL(nf_hook_slow_bpf)))
>
> ?
>
> If yes, I don't see how this will work for the case where I only have an
> address, i.e.:
>
> if (!emit(p, BPF_EMIT_CALL(h->hook))) ....
>
> (Also, the address might be in a kernel module)
>
> > On x86-64 it will be a nop.
> > On x86-32 it will do quite a bit of work.
>
> If this is only a problem for 32-bit arches, I could also make this
> 'depends on CONFIG_64BIT'.

If that's acceptable, sure.
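
A minimal Kconfig sketch of such a gate; the symbol name is made up here and
the RFC may wire the dependency up differently:

config NF_HOOK_BPF
	bool "netfilter base hook BPF program generator"
	depends on BPF_JIT
	depends on 64BIT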

> But perhaps I am on the wrong track; I see existing code doing:
>         *insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);

Yes, because we do:
                /* BPF_EMIT_CALL() assumptions in some of the map_gen_lookup
                 * and other inlining handlers are currently limited to 64 bit
                 * only.
                 */
                if (prog->jit_requested && BITS_PER_LONG == 64 &&


I think you already gate this feature with jit_requested?
Otherwise it's going to be slow in the interpreter.
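
Roughly what such a guard could look like at the top of the generator,
mirroring the map_gen_lookup restriction quoted above; the function name is
illustrative, not from the RFC:

#include <linux/filter.h>	/* bpf_jit_enable */

/* Only hand out a generated program when it will actually be JITed on a
 * 64-bit host; otherwise keep using the existing indirect-call loop.
 */
static bool nf_hook_bpf_supported(void)
{
	return BITS_PER_LONG == 64 && bpf_jit_enable;
}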

> (kernel/bpf/hashtab.c).
>
> > > +   prog = bpf_prog_select_runtime(prog, &err);
> > > +   if (err) {
> > > +           bpf_prog_free(prog);
> > > +           return NULL;
> > > +   }
> >
> > Would be good to do bpf_prog_alloc_id() so it can be seen in
> > bpftool prog show.
>
> Thanks a lot for the hint:
>
> 39: unspec  tag 0000000000000000
> xlated 416B  jited 221B  memlock 4096B

Probably should do bpf_prog_calc_tag() too.
And please give it some meaningful name.
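
Roughly along these lines, as an untested sketch: bpf_prog_alloc_id() is
static in kernel/bpf/syscall.c today, so it would need to be made reachable
from here, and the program name below is only an example:

	prog = bpf_prog_select_runtime(prog, &err);
	if (err) {
		bpf_prog_free(prog);
		return NULL;
	}

	/* Make the generated program identifiable in "bpftool prog show":
	 * give it a name, compute the tag and allocate an id.
	 * Error unwinding is condensed here.
	 */
	strscpy(prog->aux->name, "nf_base_hook", sizeof(prog->aux->name));

	err = bpf_prog_calc_tag(prog);
	if (!err)
		err = bpf_prog_alloc_id(prog);
	if (err) {
		bpf_prog_free(prog);
		return NULL;
	}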

> bpftool prog  dump xlated id 39
>    0: (bf) r6 = r1
>    1: (79) r7 = *(u64 *)(r1 +8)
>    2: (b4) w8 = 0
>    3: (85) call ipv6_defrag#526144928
>    4: (55) if r0 != 0x1 goto pc+24
>    5: (bf) r1 = r6
>    6: (04) w8 += 1
>    7: (85) call ipv6_conntrack_in#526206096
>    [..]

Nice.
bpftool prog profile should work too.
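
For example, assuming the program id shown above and a kernel with BPF
trampoline (fentry/fexit) support, which prog profile needs:

	# count cycles and instructions spent in the generated program
	bpftool prog profile id 39 duration 10 cycles instructions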


