On Tue, 17 Mar 2020 13:42:43 -0700 Jakub Kicinski <kuba@xxxxxxxxxx> wrote:

> On Tue, 17 Mar 2020 18:29:12 +0100 Jesper Dangaard Brouer wrote:
> > XDP has evolved to support several frame sizes, but xdp_buff was not
> > updated with this information. The frame size (frame_sz) member of
> > xdp_buff is introduced to know the real size of the memory the frame is
> > delivered in.
> >
> > When introducing this, also make it clear that some tailroom is
> > reserved/required when creating SKBs using build_skb().
> >
> > It would also have been an option to introduce a pointer to
> > data_hard_end (with reserved offset). The advantage with frame_sz is
> > that (like rxq) drivers only need to setup/assign this value once per
> > NAPI cycle. Due to XDP-generic (and some drivers) it's not possible to
> > store frame_sz inside xdp_rxq_info, because it varies per packet as it
> > can depend on the packet length.
>
> Do you reckon it would be too ugly to make xdp-generic suffer and have
> it set the length in rxq per packet? We shouldn't handle multiple
> packets from the same rxq in parallel, no?

It's not only xdp-generic, but also xdp-native drivers like ixgbe and
i40e that have modes (>4K page) with a per-packet frame size. As this
kind of mode has in practice been "allowed" (without me realizing it),
I expect that other drivers will likely also use this.

Regarding the parallel argument: Intel, at LPC, had done experiments
with "RX-bulking" that required multiple xdp_buff's. It's not exactly
parallel, but I see progress in that direction.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer