On Fri, 29 Jan 2021 23:04:08 +0100
Lorenzo Bianconi <lorenzo@xxxxxxxxxx> wrote:

> Split ndo_xdp_xmit and ndo_start_xmit use cases in the veth_xdp_rcv
> routine in order to alloc skbs in bulk for the XDP_PASS verdict.
> Introduce the xdp_alloc_skb_bulk utility routine to alloc the skb
> bulk list. The proposed approach has been tested in the following
> scenario:
>
> eth (ixgbe) --> XDP_REDIRECT --> veth0 --> (remote-ns) veth1 --> XDP_PASS
>
> XDP_REDIRECT: xdp_redirect_map bpf sample
> XDP_PASS: xdp_rxq_info bpf sample
>
> traffic generator: pkt_gen sending udp traffic on a remote device
>
> bpf-next master: ~3.64Mpps
> bpf-next + skb bulking allocation: ~3.79Mpps
>
> Signed-off-by: Lorenzo Bianconi <lorenzo@xxxxxxxxxx>
> ---

I wanted Lorenzo to test 8 vs 16 bulking, but after much testing and
IRC dialog, we could not find or measure any difference with enough
accuracy. Thus:

Acked-by: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>

> Changes since v2:
> - use __GFP_ZERO flag instead of memset
> - move some veth_xdp_rcv_batch() logic in veth_xdp_rcv_skb()
>
> Changes since v1:
> - drop patch 2/3, squash patch 1/3 and 3/3
> - set VETH_XDP_BATCH to 16
> - rework veth_xdp_rcv to use __ptr_ring_consume
> ---
>  drivers/net/veth.c | 78 ++++++++++++++++++++++++++++++++++------------
>  include/net/xdp.h  |  1 +
>  net/core/xdp.c     | 11 +++++++
>  3 files changed, 70 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index 6e03b619c93c..aa1a66ad2ce5 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -35,6 +35,7 @@
>  #define VETH_XDP_HEADROOM (XDP_PACKET_HEADROOM + NET_IP_ALIGN)
>
>  #define VETH_XDP_TX_BULK_SIZE 16
> +#define VETH_XDP_BATCH 16

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
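
For readers following the thread: the body of the new xdp_alloc_skb_bulk
helper is snipped above (the diffstat only shows 11 lines added to
net/core/xdp.c), so here is a minimal sketch of how such a helper could
be built on the kernel's existing bulk slab API. The name and signature
follow the patch description; the body is an illustration under those
assumptions, not the patch itself.

	/* Illustrative sketch only. Bulk-allocate n_skb skb heads from
	 * the skbuff_head_cache slab in a single call, instead of one
	 * kmem_cache_alloc() per packet. kmem_cache_alloc_bulk() has
	 * all-or-nothing semantics: it returns the requested count on
	 * success and 0 on failure, so a single check suffices.
	 */
	int xdp_alloc_skb_bulk(void **skbs, int n_skb, gfp_t gfp)
	{
		n_skb = kmem_cache_alloc_bulk(skbuff_head_cache, gfp,
					      n_skb, skbs);
		if (unlikely(!n_skb))
			return -ENOMEM;

		return 0;
	}

On the veth side, veth_xdp_rcv() would then request up to
VETH_XDP_BATCH (16) heads at once, presumably with
GFP_ATOMIC | __GFP_ZERO given the v3 change replacing memset with
__GFP_ZERO, and build one skb from each consumed xdp_frame (error
handling for a failed bulk call is elided here).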