On Tue, 07 Jan 2020 14:27:41 +0100
Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:

> Jesper Dangaard Brouer <brouer@xxxxxxxxxx> writes:
>
> > On Tue, 07 Jan 2020 12:25:47 +0100
> > Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
> >
> >> Björn Töpel <bjorn.topel@xxxxxxxxx> writes:
> >>
> >> > On Fri, 20 Dec 2019 at 11:30, Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
> >> >>
> >> >> Jesper Dangaard Brouer <brouer@xxxxxxxxxx> writes:
> >> >>
> >> > [...]
> >> >> > I have now gone over the entire patchset, and everything looks
> >> >> > perfect; I will go as far as saying it is brilliant. We previously
> >> >> > had the issue that using different redirect maps in a BPF prog
> >> >> > would reduce the bulking effect, as map_to_flush caused the
> >> >> > previous map to get flushed. This is now solved :-)
> >> >>
> >> >> Another thing that occurred to me while thinking about this: Now
> >> >> that we have a single flush list, is there any reason we couldn't
> >> >> move the devmap xdp_bulk_queue into struct net_device? That way it
> >> >> could also be used for the non-map variant of bpf_redirect()?
> >> >>
> >> >
> >> > Indeed! (At least I don't see any blockers...)
> >>
> >> Cool, that's what I thought. Maybe I'll give that a shot, then, unless
> >> you beat me to it ;)
> >
> > Generally sounds like a good idea.
> >
> > Is this only for the devmap xdp_bulk_queue?
>
> Non-map redirect only supports redirecting across interfaces (the
> parameter is an ifindex), so yeah, this would be just for that.

Sure, then you don't need to worry about the gotchas below. I do like
the idea, as this should solve the non-map redirect performance issue.

> > Some gotchas off the top of my head:
> >
> > The cpumap also has a struct xdp_bulk_queue, which has a different
> > layout. (Sidenote: due to BTF we likely want to rename that.)
> >
> > If you want to generalize this across all redirect map types, you
> > should know that it was on purpose that I designed the bulking to be
> > map specific, because that allows each map to control its own optimal
> > bulking. E.g. devmap bulks 16 frames, cpumap does 8 frames (as that
> > matches sending 1 cacheline into the underlying ptr_ring), and xskmap
> > does 64 AFAIK (which could hurt latency, but that is another
> > discussion).
>
> Björn's patches do leave the per-type behaviour; they just get rid of
> the per-map flush queues... :)

Yes, I know ;-)

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
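
To make the idea under discussion concrete, here is a minimal sketch of
a per-netdev bulk queue that the non-map bpf_redirect() (plain ifindex)
path could share with devmap. The struct name, field names, and the
placement in struct net_device below are assumptions for illustration,
modelled on the devmap bulk queue described in the thread; they are not
the actual kernel code or the eventual patch.

/*
 * Illustrative sketch only; names and layout are assumptions, not the
 * real devmap code or the patch being discussed.
 */
#include <linux/list.h>
#include <linux/netdevice.h>
#include <net/xdp.h>

#define XDP_DEV_BULK_SIZE 16	/* devmap bulks 16 frames per flush */

/*
 * Per-CPU bulk queue. If it hangs off struct net_device instead of a
 * devmap entry, the ifindex-based bpf_redirect() path can reuse the
 * same bulking and the single per-CPU flush list.
 */
struct xdp_dev_bulk_queue {
	struct xdp_frame *q[XDP_DEV_BULK_SIZE];	/* queued frames */
	struct list_head flush_node;		/* on the per-CPU flush list */
	struct net_device *dev_rx;		/* ingress device (for tracepoints) */
	unsigned int count;			/* frames queued so far */
};

/*
 * Hypothetical placement, e.g. a per-CPU pointer in the netdev:
 *
 *	struct net_device {
 *		...
 *		struct xdp_dev_bulk_queue __percpu *xdp_bulkq;
 *		...
 *	};
 */

As the gotchas above note, cpumap and xskmap keep their own bulk sizes
(8 and 64 frames respectively), so a shared per-netdev queue like this
would only cover the devmap/ifindex redirect case.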