Re: [PATCH bpf-next 0/9] xdp: Support multiple programs on a single interface through chain calls

On Fri, 4 Oct 2019 at 11:34, Edward Cree <ecree@xxxxxxxxxxxxxx> wrote:
>
> Enforcement is easily dealt with: you just don't give people the caps/
>  perms to load XDP programs directly, so the only way they can do it is
>  via your loader (which you give them a socket or dbus or something to
>  talk to).

Writing this daemon is actually harder than it sounds. Loading eBPF
programs can become fairly complex, for example when eBPF maps are
shared between different programs. If you want to support all use
cases (which you kind of have to) then you'll end up writing an RPC
wrapper around libbpf, which sounds very painful to me.

So I think for this to work at all, loading has to happen in the user space
components. Only construction of the control flow should be centralised.
This has the knock on effect that these components need
CAP_NET_ADMIN, since too much of eBPF relies on having that
capability right now: various map types, safety features applied to non-root
eBPF, etc. Given time this will be fixed, and maybe these programs can then
just have CAP_BPF or whatever.

I chatted with my colleague Arthur, and we think this might work if all
programs are forced to comply with the xdpcap-style tail call map:
a prog array with MAX_XDP_ACTION slots, which each program
calls into via

  bpf_tail_call(ctx, &map, action);
  return action; /* only reached if the slot for this action is empty */
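
To make this concrete, here is a rough sketch of what a complying
program could look like (map name, entry count and section names are
made up for illustration, not from the patch set):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* One slot per XDP action. The daemon fills each slot with the
   * next program for that verdict, or leaves it empty to end the
   * chain there. */
  struct {
          __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
          __uint(max_entries, 5); /* XDP_ABORTED..XDP_REDIRECT */
          __uint(key_size, sizeof(__u32));
          __uint(value_size, sizeof(__u32));
  } xdp_actions SEC(".maps");

  SEC("xdp")
  int filter(struct xdp_md *ctx)
  {
          __u32 action = XDP_PASS; /* this program's verdict */

          /* Jump to the next program registered for this verdict. */
          bpf_tail_call(ctx, &xdp_actions, action);

          /* Only reached if the slot was empty: end of the chain. */
          return action;
  }

  char _license[] SEC("license") = "GPL";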

You could then send (program fd, tail call map fd) along with a
priority of some sort via SCM_RIGHTS. The daemon then updates the
tail call maps as needed. The problem is that this only allows for
linear (not tree-like) control flow.
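
The fd passing itself would be something along these lines (error
handling elided, names made up):

  #include <string.h>
  #include <sys/socket.h>

  /* Hand (prog fd, tail call map fd, priority) to the daemon over a
   * Unix socket: the fds travel in an SCM_RIGHTS control message,
   * the priority in the regular payload. */
  static int send_to_daemon(int sock, int prog_fd, int map_fd, int prio)
  {
          int fds[2] = { prog_fd, map_fd };
          char cbuf[CMSG_SPACE(sizeof(fds))];
          struct iovec iov = { .iov_base = &prio, .iov_len = sizeof(prio) };
          struct msghdr msg = {
                  .msg_iov = &iov,
                  .msg_iovlen = 1,
                  .msg_control = cbuf,
                  .msg_controllen = sizeof(cbuf),
          };
          struct cmsghdr *cmsg;

          memset(cbuf, 0, sizeof(cbuf));
          cmsg = CMSG_FIRSTHDR(&msg);
          cmsg->cmsg_level = SOL_SOCKET;
          cmsg->cmsg_type = SCM_RIGHTS;
          cmsg->cmsg_len = CMSG_LEN(sizeof(fds));
          memcpy(CMSG_DATA(cmsg), fds, sizeof(fds));

          return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
  }

On the daemon side, splicing a program into a chain is then a
bpf_map_update_elem() on the received map fd, with the next program's
fd as the value.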

We'll try to hack up a PoC to see if it works at all.

> In any case, it seems like XDP users in userspace still need to
>  communicate with each other in order to update the chain map (which
>  seems to rely on knowing where one's own program fits into it); you
>  suggest they might communicate through the chain map itself, and then
>  veer off into the weeds of finding race-free ways of doing that.  This
>  seems (to me) needlessly complex.

I agree.

> Incidentally, there's also a performance advantage to an eBPF dispatcher,
>  because it means the calls to the individual programs can be JITted and
>  therefore be direct, whereas an in-kernel data-driven dispatcher has to
>  use indirect calls (*waves at spectre*).

This is assuming we somehow get full-blown calls between distinct
eBPF programs?

> Maybe Lorenz could describe what he sees as the difficulties with the
>  centralised daemon approach.  In what ways is his current "xdpd"
>  solution unsatisfactory?

xdpd contains the logic to load and install all the different XDP programs
we have. If we want to change one of them we have to redeploy the whole
thing. Same if we want to add one. It also makes life-cycle management
harder than it should be. So our xdpd is not at all like the "loader"
you envision.

-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com


