On Tue, 26 Jan 2021 19:41:59 +0100
Lorenzo Bianconi <lorenzo@xxxxxxxxxx> wrote:

> Introduce bulking support for XDP_PASS verdict forwarding skbs to
> the networking stack
>
> Signed-off-by: Lorenzo Bianconi <lorenzo@xxxxxxxxxx>
> ---
>  drivers/net/veth.c | 43 ++++++++++++++++++++++++++-----------------
>  1 file changed, 26 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index 6e03b619c93c..23137d9966da 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -35,6 +35,7 @@
>  #define VETH_XDP_HEADROOM	(XDP_PACKET_HEADROOM + NET_IP_ALIGN)
>
>  #define VETH_XDP_TX_BULK_SIZE	16
> +#define VETH_XDP_BATCH		8
>

I suspect that VETH_XDP_BATCH = 8 is not the optimal value. You have
taken this value from the CPUMAP code, but it cannot simply be
generalized to this case.

For CPUMAP the optimal choice is actually to bulk dequeue 16 frames
from the ptr_ring, but one of the loops contains a prefetch, and that
loop should not cover more than 10 frames, because the Intel
Line-Fill-Buffer cannot have more than 10 outstanding prefetch
instructions in flight. (Yes, I measured this[1] with perf stat when
coding that.)

Could you please test with 16, to see if the results are better?

In this veth case, processing will likely start on the same CPU that
received the xdp_frames. Thus, things are likely still hot in cache,
and we don't have to care as much about moving cachelines across CPUs.
So I don't expect it will make much of a difference.

[1] https://github.com/xdp-project/xdp-project/blob/master/areas/cpumap/cpumap02-optimizations.org

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
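
To make the prefetch-capping pattern concrete, here is a rough,
userspace-only C sketch of the idea described above for CPUMAP: dequeue
up to 16 frame pointers per round, but never keep more than 10 software
prefetches in flight ahead of the processing loop. This is not the
kernel cpumap code and not part of Lorenzo's patch; the ring and the
ring_dequeue_bulk()/process_frame() helpers are invented for this
example, and the real code in kernel/bpf/cpumap.c differs in its
details.

/*
 * Illustrative userspace sketch only -- not kernel code.  It shows the
 * "bulk dequeue up to 16, cap in-flight prefetches at 10" pattern
 * discussed above.  The ring, ring_dequeue_bulk() and process_frame()
 * are invented helpers for the example.
 */
#include <stddef.h>
#include <stdio.h>

#define BULK_SIZE      16	/* frames pulled from the ring per round */
#define PREFETCH_DEPTH 10	/* cap on outstanding software prefetches */

struct frame {
	unsigned char data[256];
};

static struct frame pool[64];
static size_t pool_head;

/* Stand-in for a ptr_ring style bulk dequeue: returns count dequeued. */
static size_t ring_dequeue_bulk(struct frame **out, size_t max)
{
	size_t n = 0;

	while (n < max && pool_head < sizeof(pool) / sizeof(pool[0]))
		out[n++] = &pool[pool_head++];
	return n;
}

static void process_frame(struct frame *f)
{
	f->data[0] ^= 0xff;	/* touch the payload the prefetch targeted */
}

static void poll_once(void)
{
	struct frame *batch[BULK_SIZE];
	size_t n = ring_dequeue_bulk(batch, BULK_SIZE);
	size_t i;

	/* Warm up: prefetch at most PREFETCH_DEPTH frames up front. */
	for (i = 0; i < n && i < PREFETCH_DEPTH; i++)
		__builtin_prefetch(batch[i]->data);

	for (i = 0; i < n; i++) {
		/* Keep the prefetch window ahead of the processing loop. */
		if (i + PREFETCH_DEPTH < n)
			__builtin_prefetch(batch[i + PREFETCH_DEPTH]->data);
		process_frame(batch[i]);
	}
}

int main(void)
{
	size_t total = sizeof(pool) / sizeof(pool[0]);

	while (pool_head < total)
		poll_once();
	printf("processed %zu frames\n", pool_head);
	return 0;
}

The sketch builds with gcc or clang, since __builtin_prefetch is a
compiler builtin; the cap of 10 simply mirrors the Line-Fill-Buffer
limit measured in [1].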