On Fri, 6 Nov 2020 19:19:11 +0100
Lorenzo Bianconi <lorenzo@xxxxxxxxxx> wrote:

> Convert mlx5 driver to xdp_return_frame_bulk APIs.
>
> XDP_REDIRECT (upstream codepath): 8.5Mpps
> XDP_REDIRECT (upstream codepath + bulking APIs): 10.1Mpps
>
> Signed-off-by: Lorenzo Bianconi <lorenzo@xxxxxxxxxx>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)

This patch is not 100% correct. The bulking needs to happen at another
level. I have already fixed this up in the patches I'm currently
benchmarking with. (Lorenzo is informed and aware)

Too many details to cover here, but all are available in [1].

[1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/xdp_bulk_return01.org

> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> index ae90d533a350..5fdfbf390d5c 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> @@ -369,8 +369,10 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
>  			      bool recycle)
>  {
>  	struct mlx5e_xdp_info_fifo *xdpi_fifo = &sq->db.xdpi_fifo;
> +	struct xdp_frame_bulk bq;
>  	u16 i;
>  
> +	bq.xa = NULL;
>  	for (i = 0; i < wi->num_pkts; i++) {
>  		struct mlx5e_xdp_info xdpi = mlx5e_xdpi_fifo_pop(xdpi_fifo);
>  
> @@ -379,7 +381,7 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
>  		/* XDP_TX from the XSK RQ and XDP_REDIRECT */
>  		dma_unmap_single(sq->pdev, xdpi.frame.dma_addr,
>  				 xdpi.frame.xdpf->len, DMA_TO_DEVICE);
> -		xdp_return_frame(xdpi.frame.xdpf);
> +		xdp_return_frame_bulk(xdpi.frame.xdpf, &bq);
>  		break;
>  	case MLX5E_XDP_XMIT_MODE_PAGE:
>  		/* XDP_TX from the regular RQ */
> @@ -393,6 +395,7 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
>  			WARN_ON_ONCE(true);
>  		}
>  	}
> +	xdp_flush_frame_bulk(&bq);
>  }
>  
>  bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq)

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer