> General issue (that I think must be resolved/discussed as part of this
> initial patchset).

I was thinking about this issue as well.

> When XDP_REDIRECT'ing a multi-buffer xdp_frame out of another driver's
> ndo_xdp_xmit(), what happens if the remote driver doesn't understand the
> multi-buffer format?
>
> My guess is that it will only send the first part of the packet (in the
> main page). Fortunately we don't leak memory, because xdp_return_frame()
> handles freeing the other segments. I assume this isn't acceptable
> behavior... or maybe it is?
>
> What are our options for handling this:
>
> 1. Add mb support in ndo_xdp_xmit in every driver?

I guess this is the optimal approach.

> 2. Drop xdp->mb frames inside ndo_xdp_xmit (in every driver without
>    support)?

Probably this is the easiest solution. Anyway, if we drop patch 6/6 this
is not a real issue, since the driver is not yet allowed to receive frames
bigger than one page, and we have time to address this issue in each
driver.

Regards,
Lorenzo

> 3. Add a core-code check before calling ndo_xdp_xmit()?
>
> --Jesper
>
> On Wed, 19 Aug 2020 15:13:45 +0200 Lorenzo Bianconi <lorenzo@xxxxxxxxxx> wrote:
>
> > Finalize XDP multi-buffer support for the mvneta driver, introducing
> > the capability to map non-linear buffers on the tx side.
> > Introduce a multi-buffer bit (mb) in xdp_frame/xdp_buff to specify
> > whether the shared_info area has been properly initialized.
> > Initialize the multi-buffer bit (mb) to 0 in all XDP-capable drivers.
> > Add multi-buffer support to the xdp_return_{buff/frame} utility
> > routines.
> >
> > Changes since RFC:
> > - squash multi-buffer bit initialization into a single patch
> > - add mvneta non-linear XDP buff support for the tx side
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
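To make option 3 above concrete, here is a minimal userspace sketch of a
core-side filter that drops mb frames before they reach a driver that does
not understand the multi-buffer layout. The structs are heavily simplified
stand-ins, and the `xdp_mb_capable` flag and `xdp_xmit_filter_mb()` helper
are hypothetical names for illustration, not the kernel's actual API:

```c
#include <stdbool.h>

/* Simplified stand-ins; the real struct xdp_frame and struct net_device
 * are far richer. The "mb" bit mirrors the multi-buffer bit proposed in
 * this patchset. */
struct xdp_frame {
	unsigned int len;
	unsigned int mb : 1;	/* extra segments live in shared_info */
};

struct net_device {
	bool xdp_mb_capable;	/* hypothetical: ndo_xdp_xmit handles mb */
};

/* Option 3: core-code check before calling ndo_xdp_xmit().
 * Compacts the frame array in place, keeping only frames the target
 * driver can transmit correctly; mb frames are filtered out (in the
 * kernel they would be freed via xdp_return_frame()) when the driver
 * does not understand the multi-buffer layout. Returns the number of
 * frames left to hand to the driver. */
static int xdp_xmit_filter_mb(struct net_device *dev,
			      struct xdp_frame **frames, int n)
{
	int kept = 0;

	for (int i = 0; i < n; i++) {
		if (frames[i]->mb && !dev->xdp_mb_capable)
			continue;	/* drop: driver would truncate it */
		frames[kept++] = frames[i];
	}
	return kept;
}
```

This keeps per-driver changes at zero (unlike option 1) at the cost of one
extra pass over the frame batch in the redirect path.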