Bringing these questions to the xdp-newbies list, where they belong.
Answers inlined below.

On Tue, 20 Aug 2019 21:17:57 +0200
Július Milan <Julius.Milan@xxxxxxxxxxxxx> wrote:

> I am writing an AF_XDP driver for FDio VPP. I have 2 questions.

That sounds excellent. I was hoping someone would do this for FDio VPP.
Do notice that DPDK now also has AF_XDP support. IMHO it makes a lot of
sense to implement AF_XDP natively in FDio and avoid the DPDK
dependency. (AFAIK FDio already has other back-ends than DPDK.)

> 1 - I created a simple driver according to the sample in the kernel.
> I load my XDP program and pin the maps.
>
> Then in the user application I create a socket, mmap the memory and
> push it into the xskmap in the program. All fine so far.
>
> Then I start another instance of the user application and do the
> same: create a socket, mmap the memory and try to push it somewhere
> else into the map. But I get errno 16 "Device or resource busy"
> (EBUSY) when trying to bind.
>
> I guess the memory can't be mmapped twice, but should be shared
> instead, is that correct?

I'm cc'ing the AF_XDP experts, as I'm not sure myself. I mostly deal
with the in-kernel XDP path. (AF_XDP is essentially kernel bypass :-O)

> If so, I am wondering how to solve this nicely.
>
> Can I store the value of the first socket (that created the mmapped
> memory) in some special map in my XDP program, to avoid complicated
> inter-process communication?
>
> And what happens if this first socket is closed while other sockets
> are still alive (using its shared mmapped memory)?
>
> What would you recommend? Maybe you have some sample.

We just added a sample (by Eelco, Cc'ed) to the XDP-tutorial:
 https://github.com/xdp-project/xdp-tutorial/tree/master/advanced03-AF_XDP

At least read the README.org file... to get over the common gotchas.

AFAIK the sample doesn't cover your use-case. I guess we/someone should
extend the sample to illustrate how multiple interfaces can share the
same UMEM.

The official documentation is:
 https://www.kernel.org/doc/html/latest/networking/af_xdp.html

> Can I also do atomic operations? (I want it just for such rare cases
> as initialization of the next socket, to check if there already is
> one that mmapped the memory.)
>
> 2 - We also want to do some decap/encap on the XDP layer, before
> redirecting the packet to the socket.

Decap on the XDP layer is an excellent use-case, as it demonstrates
cooperation between XDP and the AF_XDP kernel-bypass facility.

> On the RX side it is easy: I do what I want and redirect the packet
> to the socket. But can I achieve the same on TX?

(Yes, the RX case is easy.)

We don't have an XDP TX hook yet... but so many people have requested
this that we should add it.

> Can I catch the packet on TX in XDP and do something with it
> (encapsulate it) before sending it out?

Usually we recommend people use the TC egress BPF hook to do the encap
on TX. For the AF_XDP use-case the TC hook isn't there... so that is
not an option. Again an argument for an XDP TX hook.

You could, of course, add the encap header in your AF_XDP userspace
program, but I do understand it would make architectural sense for
in-kernel XDP to act as a decap/encap layer.

> If so, what about performance?

For AF_XDP the RX-side is really, really fast, even in copy-mode.

For the AF_XDP TX-side in copy-mode, it is rather slow, as it allocates
SKBs etc. We could optimize this further, but we have not. When
enabling AF_XDP zero-copy mode, the TX-side is also super fast.

Another hint for the AF_XDP TX-side: remember to "produce" several
packets before doing the sendmsg system call, thus effectively doing
bulking on the TX-ring.
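
To make that TX-bulking hint concrete, here is a rough, untested sketch
using the libbpf xsk.h helpers. The struct xsk_info wrapper and the
frame-address arrays are hypothetical; adapt it to however VPP manages
its UMEM frames. The point is simply: reserve and fill N descriptors,
submit them together, then do a single syscall kick.

  #include <errno.h>
  #include <sys/socket.h>
  #include <bpf/xsk.h>

  struct xsk_info {                 /* hypothetical wrapper */
          struct xsk_socket *xsk;   /* from xsk_socket__create() */
          struct xsk_ring_prod tx;  /* TX producer ring */
  };

  /* Queue 'nb' UMEM frames (addr + len) and kick the kernel once. */
  static int tx_burst(struct xsk_info *info, const __u64 *addrs,
                      const __u32 *lens, unsigned int nb)
  {
          __u32 idx;
          unsigned int i;

          /* Reserve nb slots in the TX ring; returns 0 if the ring
           * does not have room for all of them. */
          if (xsk_ring_prod__reserve(&info->tx, nb, &idx) != nb)
                  return -EAGAIN;

          for (i = 0; i < nb; i++) {
                  struct xdp_desc *desc =
                          xsk_ring_prod__tx_desc(&info->tx, idx + i);

                  desc->addr = addrs[i];
                  desc->len  = lens[i];
          }

          /* Publish all nb descriptors at once... */
          xsk_ring_prod__submit(&info->tx, nb);

          /* ...and one sendto() kick for the whole batch (needed in
           * copy-mode; EAGAIN/EBUSY just mean "try again later"). */
          if (sendto(xsk_socket__fd(info->xsk), NULL, 0, MSG_DONTWAIT,
                     NULL, 0) < 0 &&
              errno != EAGAIN && errno != EBUSY && errno != ENOBUFS)
                  return -errno;

          return 0;
  }

And for the decap-then-redirect RX case you describe above, the
in-kernel XDP side can look roughly like the sketch below (also
untested; ENCAP_HDR_SIZE and the xsks_map name are made up, and the
map-definition style depends on your libbpf version):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  #define ENCAP_HDR_SIZE 8   /* length of your outer header */

  struct bpf_map_def SEC("maps") xsks_map = {
          .type        = BPF_MAP_TYPE_XSKMAP,
          .key_size    = sizeof(int),
          .value_size  = sizeof(int),
          .max_entries = 64,   /* one slot per RX queue */
  };

  SEC("xdp")
  int xdp_decap_redirect(struct xdp_md *ctx)
  {
          /* Pop the outer header: a positive delta moves data forward,
           * i.e. removes bytes from the front of the packet. */
          if (bpf_xdp_adjust_head(ctx, ENCAP_HDR_SIZE))
                  return XDP_ABORTED;

          /* Hand the frame to the AF_XDP socket bound to this queue. */
          return bpf_redirect_map(&xsks_map, ctx->rx_queue_index, 0);
  }

  char _license[] SEC("license") = "GPL";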
> By the way, great job with XDP ;)

Thanks!

--
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer