Re: AF_XDP integration with FDio VPP? (Was: Questions about XDP)

Hi all,

I'd like to add a question of my own, directed mainly at the AF_XDP
devs. I understand the reasons behind the SPSC ring design, but what
would be the cost of an MPSC (Multiple Producer Single Consumer)
design? It would allow a single UMEM to serve the entire net device
(i.e. to be bound to all of the net device's hardware queues). IMHO
this would have a big impact because, among other things, one would
no longer have to bother with ethtool to steer traffic to specific
queues. And shouldn't we question whether "UMEM per hardware queue"
is really more convenient than "UMEM per net device"?
Btw, congrats on the great work, guys.
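
To make the question concrete, here is a minimal sketch of today's
per-queue model, using libbpf's xsk.h helpers (the struct name, frame
count, and simplified error handling are all illustrative, not a
recommendation):

#include <stdlib.h>
#include <unistd.h>
#include <bpf/xsk.h>

#define NUM_FRAMES 4096

struct queue_ctx {
        struct xsk_umem *umem;
        struct xsk_socket *xsk;
        struct xsk_ring_prod fill, tx;
        struct xsk_ring_cons comp, rx;
        void *bufs;
};

static int setup_queue(const char *ifname, __u32 queue_id,
                       struct queue_ctx *ctx)
{
        __u64 size = NUM_FRAMES * XSK_UMEM__DEFAULT_FRAME_SIZE;
        int ret;

        /* One UMEM per hardware queue: this allocation is exactly
         * what a netdev-wide (MPSC) UMEM would let us share. */
        ret = posix_memalign(&ctx->bufs, getpagesize(), size);
        if (ret)
                return -ret;

        ret = xsk_umem__create(&ctx->umem, ctx->bufs, size,
                               &ctx->fill, &ctx->comp, NULL);
        if (ret)
                return ret;

        /* The socket binds to exactly one (ifname, queue_id) pair,
         * so traffic must be steered to that queue with ethtool. */
        return xsk_socket__create(&ctx->xsk, ifname, queue_id,
                                  ctx->umem, &ctx->rx, &ctx->tx, NULL);
}

With an MPSC design, the UMEM created here could be allocated once
and bound to every queue of the device instead.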

On Sat, Aug 24, 2019 at 01:29 William Tu
<u9012063@xxxxxxxxx> wrote:
>
> On Fri, Aug 23, 2019 at 7:56 AM Július Milan <Julius.Milan@xxxxxxxxxxxxx> wrote:
> >
> > Many thanks, guys, very much appreciated.
> >
> > I'm going to take a look at the OVS implementation, but I would like to confirm something first.
> >
> > >> I took the _user part and split it into two:
> > >>
> > >> "loader" -  Executed once to setup environment and once to cleanup, loads _kern.o, attaches it to interface and pin maps under /sys/fs/bpf.
> > >>
> > >> and
> > >>
> > >> "worker" - Executed as many as required. Every instance loads maps from /sys/fs/bpf, create one AF_XDP sock, update xsks record and start listen/process packets from AF_XDP (in test scenario we are using l2fwd because of write-back). I had to add missing cleanups there( close(fd), munmap()). This should be vpp in final solution.
> > >>
> > >> So far so good.
> > >>
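
For anyone following along, that loader step corresponds roughly to
this sketch (the object file name, map name, and pin path are
illustrative):

#include <net/if.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static int load_and_pin(const char *ifname)
{
        struct bpf_object *obj;
        int prog_fd, map_fd;

        /* Load _kern.o and attach its XDP program to the device. */
        if (bpf_prog_load("xdpsock_kern.o", BPF_PROG_TYPE_XDP,
                          &obj, &prog_fd))
                return -1;
        if (bpf_set_link_xdp_fd(if_nametoindex(ifname), prog_fd, 0) < 0)
                return -1;

        /* Pin the xsks map so independent workers can reopen it. */
        map_fd = bpf_object__find_map_fd_by_name(obj, "xsks_map");
        if (map_fd < 0)
                return -1;
        return bpf_obj_pin(map_fd, "/sys/fs/bpf/xsks_map");
}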
> > >> I'm unable to start more than one worker due to the previously mentioned error. The first instance works properly; every subsequent one fails on bind (the line number may not match due to local changes):
> > >>
> > >> xdpsock_user.c:xsk_configure:595: Assertion failed: bind(sfd, (struct sockaddr *)&sxdp, sizeof(sxdp)) == 0: errno: 16/"Device or resource busy"
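
A side note on that EBUSY: bind() fails whenever a second socket
targets the same (ifname, queue_id) pair, so each worker needs its
own queue id. The worker-side registration itself can be a few lines,
assuming the loader pinned the xsks map as above (the pin path and
helper name are illustrative):

#include <unistd.h>
#include <bpf/bpf.h>
#include <bpf/xsk.h>

static int register_worker_socket(struct xsk_socket *xsk, int queue_id)
{
        int map_fd, sock_fd, ret;

        /* Reopen the map the loader pinned. */
        map_fd = bpf_obj_get("/sys/fs/bpf/xsks_map");
        if (map_fd < 0)
                return map_fd;

        /* Point the XDP program's entry for this rx queue at our
         * freshly bound AF_XDP socket. */
        sock_fd = xsk_socket__fd(xsk);
        ret = bpf_map_update_elem(map_fd, &queue_id, &sock_fd, 0);
        close(map_fd);
        return ret;
}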
> > >>
> > >>
> > > I don't think you can have multiple threads binding one XSK; see
> > > xsk_bind() in the kernel source.
> > > For AF_XDP in OVS, we create multiple XSKs with non-shared umems,
> > > and each has its own thread.
> >
> > In OVS, can you bind two sockets with non-shared umem to the same interface?
>
> Yes, but to a different queue id on the same interface.
> So each xsk with a non-shared umem binds to a distinct queue id of that interface.
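
If I read that right, the OVS layout is roughly the following sketch
(it reuses the illustrative queue_ctx/setup_queue from my sketch
above; rx_loop stands in for a hypothetical per-queue receive loop):

#include <linux/types.h>
#include <pthread.h>
#include <stdlib.h>

#define MAX_QUEUES 64

static struct queue_ctx queues[MAX_QUEUES];
static pthread_t threads[MAX_QUEUES];

static void *rx_loop(void *ctx); /* per-queue receive loop (not shown) */

static void start_workers(const char *ifname, __u32 num_queues)
{
        for (__u32 qid = 0; qid < num_queues; qid++) {
                /* Each socket gets its own UMEM and binds to its own
                 * queue id, so no two bind() calls collide. */
                if (setup_queue(ifname, qid, &queues[qid]))
                        exit(1);
                pthread_create(&threads[qid], NULL, rx_loop,
                               &queues[qid]);
        }
}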
>
> > Our goal is to have 2 or more processes (VPPs) listening on the same interface via XDP sockets,
> > with the XDP program deciding on the fly where to redirect each packet.
> Makes sense.
>
> Regards,
> William
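
For completeness, the kernel side that decides where to redirect can
stay as small as the xdpsock sample's program; a sketch (the map size
and section names are illustrative, and the bpf_helpers.h location
varies by libbpf version):

#include <linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") xsks_map = {
        .type        = BPF_MAP_TYPE_XSKMAP,
        .key_size    = sizeof(int),
        .value_size  = sizeof(int),
        .max_entries = 64,
};

SEC("xdp_sock")
int xdp_sock_prog(struct xdp_md *ctx)
{
        /* One socket per hardware queue: the rx queue index picks
         * which AF_XDP socket (if any) receives the packet. */
        return bpf_redirect_map(&xsks_map, ctx->rx_queue_index, 0);
}

char _license[] SEC("license") = "GPL";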



