On Fri, 28 May 2021 12:22:40 +0200
Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:

> On 5/28/21 12:00 PM, Magnus Karlsson wrote:
> > On Fri, May 28, 2021 at 11:52 AM Jesper Dangaard Brouer
> > <brouer@xxxxxxxxxx> wrote:
> >> On Fri, 28 May 2021 17:02:01 +0800
> >> Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
> >>> On Fri, 28 May 2021 10:55:58 +0200, Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
> >>>> Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> writes:
> >>>>
> >>>>> In xsk mode, users cannot use AF_PACKET (tcpdump) to observe the current
> >>>>> rx/tx data packets. This feature is very important in many cases. So
> >>>>> this patch allows AF_PACKET to obtain xsk packets.
> >>>>
> >>>> You can use xdpdump to dump the packets from the XDP program before it
> >>>> gets redirected into the XSK:
> >>>> https://github.com/xdp-project/xdp-tools/tree/master/xdp-dump
> >>>
> >>> Wow, this is a good idea.
> >>
> >> Yes, it is rather cool (credit to Eelco). Notice the extra info you
> >> can capture at 'exit', like XDP return codes, if_index and rx_queue.
> >>
> >> The tool uses the perf ring buffer to send/copy data to userspace.
> >> This is actually surprisingly fast, but I still think AF_XDP will be
> >> faster (though it usually 'steals' the packet).
> >>
> >> Another (crazy?) idea to extend this (and xdpdump) is to leverage
> >> Hangbin's recent XDP_REDIRECT extension e624d4ed4aa8 ("xdp: Extend
> >> xdp_redirect_map with broadcast support"). We now have an
> >> xdp_redirect_map flag BPF_F_BROADCAST; what if we created a
> >> BPF_F_CLONE_PASS flag?
> >>
> >> The semantic meaning of the BPF_F_CLONE_PASS flag would be to copy/clone
> >> the packet for the specified map target index (e.g. an AF_XDP map), but
> >> afterwards do what veth/cpumap do: create an SKB from the xdp_frame
> >> (see __xdp_build_skb_from_frame()) and send it to the netstack.
> >> (Feel free to kick me if this doesn't make any sense.)
> >
> > This would be a smooth way to implement clone support for AF_XDP. If
> > we had this and someone added AF_XDP support to libpcap, we could both
> > capture AF_XDP traffic with tcpdump (using this clone functionality in
> > the XDP program) and speed up tcpdump for dumping traffic destined for
> > regular sockets. Would that solve your use case, Xuan? Note that I have
> > not looked into the BPF_F_CLONE_PASS code, so do not know at this
> > point what it would take to support this for XSKMAPs.

There is no spoon... the BPF_F_CLONE_PASS code is only an idea at this point.

> Recently also ended up with something similar for our XDP LB to record pcaps [0] ;)
> My question is.. tcpdump doesn't really care where the packet data comes from,
> so why not extend libpcap's Linux-related internals to either capture from
> perf RB or BPF ringbuf

Just want to mention first that I do like adding a perf ring-buffer (BPF
ringbuf) interface to AF_PACKET. But this is basically what xdpdump already
does. The cool thing is that it is super flexible for adding extra info, like
xdpdump does with the XDP return codes.

> rather than AF_PACKET sockets? Cloning is slow, and if
> you need to end up creating an skb which is then cloned once again inside AF_PACKET
> it's even worse. Just relying on and reading out, say, the perf RB, you don't
> need any clones at all.

Well, this is exactly what we avoid with my idea of BPF_F_CLONE_PASS when
combined with AF_XDP. I should explain this idea better.

The trick is that AF_XDP has preallocated all the packets it will ever use
(at setup time). Thus, the AF_XDP copy-mode does no allocations, which is
why it is fast (of course ZC mode is faster, but copy-mode AF_XDP is also
VERY fast!).

(Details and steps, with AF_XDP code notes:)

When xdp_do_redirect() happens with BPF_F_CLONE_PASS set in ri->flags, the
map-specific enqueue (e.g. __xsk_map_redirect) will do a copy of the
xdp_buff (AF_XDP calls xsk_copy_xdp()), and for AF_XDP we don't need to do
a (real) allocation. Then, instead of freeing the xdp_buff in xsk_rcv()
(see the call to xdp_return_buff()), we do the xdp_frame-to-SKB work.
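To make this a bit more concrete, below is a very rough, untested sketch of
the "instead of freeing, build an SKB" step. This is only an illustration of
the idea: BPF_F_CLONE_PASS does not exist upstream, and the helper name
xsk_clone_pass_to_stack() plus the clone_pass condition in the call-site
comment are made up here. Only xdp_convert_buff_to_frame(),
xdp_build_skb_from_frame(), xdp_return_frame() and netif_receive_skb() are
existing kernel helpers.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/xdp.h>

/* HYPOTHETICAL sketch: copy-mode AF_XDP has already copied the packet into
 * its own preallocated umem frame (xsk_copy_xdp() in the enqueue path), so
 * the original xdp_buff can be handed to the netstack as an SKB instead of
 * being freed with xdp_return_buff().
 */
static int xsk_clone_pass_to_stack(struct xdp_buff *xdp, struct net_device *dev)
{
	struct xdp_frame *xdpf;
	struct sk_buff *skb;

	/* Same conversion that cpumap/veth do for redirected packets */
	xdpf = xdp_convert_buff_to_frame(xdp);
	if (unlikely(!xdpf))
		return -EOVERFLOW;

	/* Wrap the frame in an SKB and hand it to the normal RX path */
	skb = xdp_build_skb_from_frame(xdpf, dev);
	if (unlikely(!skb)) {
		xdp_return_frame(xdpf);
		return -ENOMEM;
	}

	netif_receive_skb(skb);
	return 0;
}

/* Call-site sketch, roughly where xsk_rcv() today calls xdp_return_buff()
 * after a successful copy (again, the clone_pass condition is invented for
 * illustration; the flag would have to be plumbed down from ri->flags):
 *
 *	if (!err && clone_pass)
 *		return xsk_clone_pass_to_stack(xdp, xdp->rxq->dev);
 *	xdp_return_buff(xdp);
 */

If I read __xdp_build_skb_from_frame() correctly, the SKB is built around the
existing frame data, so the netstack side costs an SKB allocation but no
second copy of the packet data.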
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer