Jakub Kicinski <kuba@xxxxxxxxxx> writes:

> On Wed, 22 Sep 2021 00:20:19 +0200 Toke Høiland-Jørgensen wrote:
>> >> Neither of those are desirable outcomes, I think; and if we add a
>> >> separate "XDP multi-buff" switch, we might as well make it system-wide?
>> >
>> > If we have an internal flag 'this driver supports multi-buf xdp' cannot we
>> > make xdp_redirect to linearize in case the packet is being redirected
>> > to non multi-buf aware driver (potentially with corresponding non mb aware xdp
>> > progs attached) from mb aware driver?
>>
>> Hmm, the assumption that XDP frames take up at most one page has been
>> fundamental from the start of XDP. So what does linearise mean in this
>> context? If we get a 9k packet, should we dynamically allocate a
>> multi-page chunk of contiguous memory and copy the frame into that, or
>> were you thinking something else?
>
> My $.02 would be to not care about redirect at all.
>
> It's not like the user experience with redirect is anywhere close
> to amazing right now. Besides (with the exception of SW devices which
> will likely gain mb support quickly) mixed-HW setups are very rare.
> If the source of the redirect supports mb so will likely the target.

It's not about device support, it's about XDP program support: if I run
an MB-aware XDP program on a physical interface and redirect the (MB)
frame into a container, and there's an XDP program running inside that
container that isn't MB-aware, bugs will ensue. It doesn't matter whether
the veth driver itself supports MB...

We could leave that as a "don't do that, then" kind of thing, but that
was what we were proposing (as the "do nothing" option) and got some
pushback on, hence why we're having this conversation :)

-Toke
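
For concreteness, below is a rough sketch of what "linearise on redirect"
could mean in practice: copy the head buffer plus all fragments of a
multi-buffer frame into one freshly allocated contiguous buffer before
handing it to a non-mb-aware target. The struct and helper names are made
up for illustration only; this is not the kernel's actual xdp_frame layout
or API. The point is just to show where the cost sits: a multi-page
contiguous allocation plus a full copy for every redirected 9k frame.

/*
 * Purely illustrative sketch, not kernel code: "linearise" a
 * hypothetical multi-buffer frame (head + frags) into one contiguous
 * buffer so a non-mb-aware consumer can see the whole packet.
 */
#include <stdlib.h>
#include <string.h>

struct mb_frag {
	void   *data;
	size_t  len;
};

struct mb_frame {
	void           *head;		/* linear part a non-mb prog can see */
	size_t          head_len;
	struct mb_frag *frags;		/* extra buffers, e.g. for a 9k MTU */
	unsigned int    nr_frags;
};

/* Returns a freshly allocated contiguous copy of the frame, or NULL. */
void *mb_frame_linearize(const struct mb_frame *f, size_t *out_len)
{
	size_t total = f->head_len;
	unsigned int i;
	char *buf, *p;

	for (i = 0; i < f->nr_frags; i++)
		total += f->frags[i].len;

	/*
	 * For a 9k frame this is a multi-page contiguous allocation plus
	 * a full copy on every redirect, which is exactly the cost in
	 * question above.
	 */
	buf = malloc(total);
	if (!buf)
		return NULL;

	p = buf;
	memcpy(p, f->head, f->head_len);
	p += f->head_len;
	for (i = 0; i < f->nr_frags; i++) {
		memcpy(p, f->frags[i].data, f->frags[i].len);
		p += f->frags[i].len;
	}

	*out_len = total;
	return buf;
}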