On Fri, Aug 23, 2019 at 7:56 AM Július Milan <Julius.Milan@xxxxxxxxxxxxx> wrote:
>
> Many thanks guys, very much appreciated.
>
> Going to take a look at the OVS implementation, but I would like to confirm something first.
>
> >> I took the _user part and split it into two:
> >>
> >> "loader" - Executed once to set up the environment and once to clean it up; it loads _kern.o, attaches it to the interface, and pins the maps under /sys/fs/bpf.
> >>
> >> and
> >>
> >> "worker" - Executed as many times as required. Every instance loads the maps from /sys/fs/bpf, creates one AF_XDP socket, updates the xsks record, and starts listening for / processing packets from AF_XDP (in the test scenario we are using l2fwd because of write-back). I had to add the missing cleanups there (close(fd), munmap()). This should be VPP in the final solution.
> >>
> >> So far so good.
> >>
> >> I'm unable to start more than one worker due to the previously mentioned error. The first instance works properly; every other one fails on bind (lineno may not match due to local changes):
> >>
> >> xdpsock_user.c:xsk_configure:595: Assertion failed: bind(sfd, (struct sockaddr *)&sxdp, sizeof(sxdp)) == 0: errno: 16/"Device or resource busy"
> >>
> >>
> > I don't think you can have multiple threads binding one XSK, see
> > xsk_bind() in kernel source.
> > For AF_XDP in OVS, we create multiple XSKs, non-shared umem and each
> > has its thread.
>
> In OVS, can you bind two sockets with non-shared umem to the same interface?

Yes, but to different queue ids on the same interface. So each XSK with a
non-shared umem binds to a distinct queue id of that interface.

> Our goal is to have 2 or more processes (VPPs) listening on the same interface via XDP socket,
> while the XDP program decides where to redirect the packets at the moment.

Makes sense.

Regards,
William
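
P.S. In case it helps, a rough sketch of the bind part for the "one XSK
per queue" approach, assuming the raw AF_XDP API as used in
samples/bpf/xdpsock_user.c (umem registration, ring mmaps, and most error
handling omitted). xsk_bind_queue() is a hypothetical helper, not OVS
code: each worker opens its own socket with its own non-shared umem and
binds to a distinct queue id; binding a second socket to the same queue
id is what gets you the EBUSY above.

#include <linux/if_xdp.h>
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical helper: one XSK per worker, bound to its own queue id. */
static int xsk_bind_queue(const char *ifname, __u32 queue_id)
{
	struct sockaddr_xdp sxdp;
	int sfd;

	sfd = socket(AF_XDP, SOCK_RAW, 0);
	if (sfd < 0)
		return -1;

	/* ... register the umem and mmap the fill/completion/rx/tx
	 * rings on sfd here, as xdpsock_user.c does ... */

	memset(&sxdp, 0, sizeof(sxdp));
	sxdp.sxdp_family = AF_XDP;
	sxdp.sxdp_ifindex = if_nametoindex(ifname);
	sxdp.sxdp_queue_id = queue_id;	/* must differ per socket */

	if (bind(sfd, (struct sockaddr *)&sxdp, sizeof(sxdp)) < 0) {
		close(sfd);
		return -1;	/* EBUSY if the queue is already bound */
	}
	return sfd;
}

So worker 0 would call xsk_bind_queue("eth0", 0), worker 1 would call
xsk_bind_queue("eth0", 1), and so on, with the XDP program redirecting
into the xsks map by rx queue index as in the sample.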