Re: [PATCH bpf-next v4 0/2] libbpf: adding AF_XDP support

On Fri, Feb 15, 2019 at 5:48 PM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
>
> On 02/13/2019 12:55 PM, Jesper Dangaard Brouer wrote:
> > On Wed, 13 Feb 2019 12:32:47 +0100
> > Magnus Karlsson <magnus.karlsson@xxxxxxxxx> wrote:
> >> On Mon, Feb 11, 2019 at 9:44 PM Jonathan Lemon <jonathan.lemon@xxxxxxxxx> wrote:
> >>> On 8 Feb 2019, at 5:05, Magnus Karlsson wrote:
> >>>
> >>>> This patch set proposes adding AF_XDP support to libbpf. The main
> >>>> reason for this is to facilitate writing applications that use
> >>>> AF_XDP by offering higher-level APIs that hide many of the details
> >>>> of the AF_XDP uapi, in the same vein as libbpf eases XDP adoption
> >>>> by offering easy-to-use, higher-level interfaces to XDP
> >>>> functionality. Hopefully this will drive adoption of AF_XDP, make
> >>>> applications that use it simpler and smaller, and also make it
> >>>> possible for applications to benefit from optimizations in the
> >>>> AF_XDP user-space access code. Previously, people just copied and
> >>>> pasted the code from the sample application into their own
> >>>> applications, which is not desirable.
> >>>
> >>> I like the idea of encapsulating the boilerplate logic in a library.
> >>>
> >>> I do think there is an important missing piece though - there should be
> >>> some code which queries the netdev for how many queues are attached, and
> >>> creates the appropriate number of umem/AF_XDP sockets.
> >>>
> >>> I ran into this issue when testing the current AF_XDP code - on my test
> >>> boxes, the mlx5 card has 55 channels (aka queues), so when the test program
> >>> binds only to channel 0, nothing works as expected, since not all traffic
> >>> is being intercepted.  While obvious in hindsight, this took a while to
> >>> track down.
> >>
> >> Yes, agreed. You are not the first one to stumble upon this problem
> >> :-). Let me think a little bit on how to solve this in a good way. We
> >> need this to be simple and intuitive, as you say.
> >
> > I see people hitting this with AF_XDP all the time... I had some
> > backup slides[2] in our FOSDEM presentation[1] that describe the issue,
> > explain the performance reason behind it, and propose a workaround.
>
> Magnus, I presume you're going to address this for the initial libbpf merge
> since the plan is to make it easier to consume for users?

I think the first thing we need is education and documentation: a FAQ
or "common mistakes" section in the documentation. And of course, we
can keep sending Jesper around the world to remind people about this ;-).
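
Jonathan's point about querying the netdev for its number of queues
would be a natural first entry in such a section. Just to illustrate,
discovering the channel count could look something like the untested
sketch below. It only uses the standard ETHTOOL_GCHANNELS ioctl; the
helper name is made up:

#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* Untested sketch: return the number of RX-capable channels of
 * ifname, or -1 on failure. Drivers report either "combined" or
 * dedicated RX channels, so check both. */
static int xsk_get_rx_channels(const char *ifname)
{
	struct ethtool_channels ch = { .cmd = ETHTOOL_GCHANNELS };
	struct ifreq ifr = {};
	int fd, ret;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;

	snprintf(ifr.ifr_name, sizeof(ifr.ifr_name), "%s", ifname);
	ifr.ifr_data = (void *)&ch;

	ret = ioctl(fd, SIOCETHTOOL, &ifr);
	close(fd);
	if (ret < 0)
		return -1;

	return ch.combined_count ? ch.combined_count : ch.rx_count;
}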

To address this at the libbpf interface level, I think the best way is
to reprogram the NIC so that all traffic is sent to the queue that you
provided in the xsk_socket__create call.
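
One possible mechanism for that, again just an untested sketch on my
part (drivers differ, and flow steering via ethtool's ntuple interface
would be an alternative), is to point every entry of the RSS
indirection table at the chosen queue; the function name is made up:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* Untested sketch: make all RSS-hashed traffic land on queue_id by
 * rewriting the NIC's indirection table. */
static int xsk_steer_all_to_queue(const char *ifname, __u32 queue_id)
{
	struct ethtool_rxfh_indir get = { .cmd = ETHTOOL_GRXFHINDIR };
	struct ethtool_rxfh_indir *set;
	struct ifreq ifr = {};
	int fd, ret = -1;
	__u32 i;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;

	snprintf(ifr.ifr_name, sizeof(ifr.ifr_name), "%s", ifname);

	/* With size == 0, the kernel just reports the table size. */
	ifr.ifr_data = (void *)&get;
	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
		goto out;

	set = calloc(1, sizeof(*set) + get.size * sizeof(__u32));
	if (!set)
		goto out;

	set->cmd = ETHTOOL_SRXFHINDIR;
	set->size = get.size;
	for (i = 0; i < get.size; i++)
		set->ring_index[i] = queue_id;

	ifr.ifr_data = (void *)set;
	ret = ioctl(fd, SIOCETHTOOL, &ifr);
	free(set);
out:
	close(fd);
	return ret;
}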
This "set up NIC steering" behavior can then be disabled with a flag,
just as the XDP program loading can be disabled. The standard config of
xsk_socket__create will then set up as much as possible for the user,
just to get up and running quickly, while more advanced users can
disable parts of it to gain more flexibility. Does this sound OK? I do
not want to go down the route of polling multiple sockets and
aggregating the traffic, as that would have significant negative
performance implications.
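
That said, for anyone who needs it today, the workaround from Jesper's
slides (quoted below) would look roughly like the following untested
sketch; process_rx() is a stand-in for whatever per-socket RX handling
the application does:

#include <poll.h>

#include "xsk.h" /* the AF_XDP helpers proposed in this patch set */

#define MAX_SOCKS 64

/* Hypothetical per-socket RX handler, not shown here. */
static void process_rx(struct xsk_socket *xsk);

/* Untested sketch: one AF_XDP socket per RX queue, with userspace
 * poll()ing across all of them. Assumes num_socks <= MAX_SOCKS. */
static void rx_loop(struct xsk_socket **xsks, int num_socks)
{
	struct pollfd fds[MAX_SOCKS] = {};
	int i;

	for (i = 0; i < num_socks; i++) {
		fds[i].fd = xsk_socket__fd(xsks[i]);
		fds[i].events = POLLIN;
	}

	for (;;) {
		if (poll(fds, num_socks, -1) <= 0)
			continue;
		for (i = 0; i < num_socks; i++)
			if (fds[i].revents & POLLIN)
				process_rx(xsks[i]);
	}
}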

/Magnus

> Few more minor items in individual patches, will reply there.
>
> Thanks,
> Daniel
>
> > [1] https://github.com/xdp-project/xdp-project/tree/master/conference/FOSDEM2019
> > [2] https://github.com/xdp-project/xdp-project/blob/master/conference/FOSDEM2019/xdp_building_block.org#backup-slides
> >
> > Alternative workaround:
> >   * Create as many AF_XDP sockets as RXQs
> >   * Have userspace poll()/select() on all sockets
> >
>


