Re: [xdp-hints] Re: [PATCH bpf-next v7 15/17] net/mlx5e: Introduce wrapper for xdp_buff


On 12/01/2023 23:55, Toke Høiland-Jørgensen wrote:
Toke Høiland-Jørgensen <toke@xxxxxxxxxx> writes:

Stanislav Fomichev <sdf@xxxxxxxxxx> writes:

On Thu, Jan 12, 2023 at 12:07 AM Tariq Toukan <ttoukan.linux@xxxxxxxxx> wrote:



On 12/01/2023 2:32, Stanislav Fomichev wrote:
From: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>

Preparation for implementing HW metadata kfuncs. No functional change.

Cc: Tariq Toukan <tariqt@xxxxxxxxxx>
Cc: Saeed Mahameed <saeedm@xxxxxxxxxx>
Cc: John Fastabend <john.fastabend@xxxxxxxxx>
Cc: David Ahern <dsahern@xxxxxxxxx>
Cc: Martin KaFai Lau <martin.lau@xxxxxxxxx>
Cc: Jakub Kicinski <kuba@xxxxxxxxxx>
Cc: Willem de Bruijn <willemb@xxxxxxxxxx>
Cc: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
Cc: Anatoly Burakov <anatoly.burakov@xxxxxxxxx>
Cc: Alexander Lobakin <alexandr.lobakin@xxxxxxxxx>
Cc: Magnus Karlsson <magnus.karlsson@xxxxxxxxx>
Cc: Maryam Tahhan <mtahhan@xxxxxxxxxx>
Cc: xdp-hints@xxxxxxxxxxxxxxx
Cc: netdev@xxxxxxxxxxxxxxx
Signed-off-by: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>
Signed-off-by: Stanislav Fomichev <sdf@xxxxxxxxxx>
---
   drivers/net/ethernet/mellanox/mlx5/core/en.h  |  1 +
   .../net/ethernet/mellanox/mlx5/core/en/xdp.c  |  3 +-
   .../net/ethernet/mellanox/mlx5/core/en/xdp.h  |  6 +-
   .../ethernet/mellanox/mlx5/core/en/xsk/rx.c   | 25 ++++----
   .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 58 +++++++++----------
   5 files changed, 50 insertions(+), 43 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 2d77fb8a8a01..af663978d1b4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -469,6 +469,7 @@ struct mlx5e_txqsq {
   union mlx5e_alloc_unit {
       struct page *page;
       struct xdp_buff *xsk;
+     struct mlx5e_xdp_buff *mxbuf;

In the XSK files below you mix usage of both alloc_units[page_idx].mxbuf and
alloc_units[page_idx].xsk, even though the two fields share the memory of a union.

As struct mlx5e_xdp_buff wraps struct xdp_buff, I think you just need to
change the type of the existing xsk field from struct xdp_buff *xsk to
struct mlx5e_xdp_buff *xsk and align the usage.

Hmmm, good point. I'm actually not sure how it works currently.
mlx5e_alloc_unit.mxbuf doesn't seem to be initialized anywhere? Toke,
am I missing something?

It's initialised piecemeal in different places; but yeah, we're mixing
things a bit...

I'm thinking about something like this:

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index af663978d1b4..2d77fb8a8a01 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -469,7 +469,6 @@ struct mlx5e_txqsq {
  union mlx5e_alloc_unit {
         struct page *page;
         struct xdp_buff *xsk;
-       struct mlx5e_xdp_buff *mxbuf;
  };

Hmm, for consistency with the non-XSK path we should rather go the other
direction and lose the xsk member, moving everything to mxbuf? Let me
give that a shot...

Something like the below?

-Toke

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 6de02d8aeab8..cb9cdb6421c5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -468,7 +468,6 @@ struct mlx5e_txqsq {
 union mlx5e_alloc_unit {
 	struct page *page;
-	struct xdp_buff *xsk;
 	struct mlx5e_xdp_buff *mxbuf;
 };
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index cb568c62aba0..95694a25ec31 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -33,6 +33,7 @@
 #define __MLX5_EN_XDP_H__
 
 #include <linux/indirect_call_wrapper.h>
+#include <net/xdp_sock_drv.h>
 
 #include "en.h"
 #include "en/txrx.h"
@@ -112,6 +113,21 @@ static inline void mlx5e_xmit_xdp_doorbell(struct mlx5e_xdpsq *sq)
  	}
  }
+static inline struct mlx5e_xdp_buff *mlx5e_xsk_buff_alloc(struct xsk_buff_pool *pool)
+{
+	return (struct mlx5e_xdp_buff *)xsk_buff_alloc(pool);

What about the space needed for the rq / cqe fields? xsk_buff_alloc won't allocate it.

+}
+
+static inline void mlx5e_xsk_buff_free(struct mlx5e_xdp_buff *mxbuf)
+{
+	xsk_buff_free(&mxbuf->xdp);
+}
+
+static inline dma_addr_t mlx5e_xsk_buff_xdp_get_frame_dma(struct mlx5e_xdp_buff *mxbuf)
+{
+	return xsk_buff_xdp_get_frame_dma(&mxbuf->xdp);
+}
+
  /* Enable inline WQEs to shift some load from a congested HCA (HW) to
   * a less congested cpu (SW).
   */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index 8bf3029abd3c..1f166dbb7f22 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -3,7 +3,6 @@
 #include "rx.h"
 #include "en/xdp.h"
-#include <net/xdp_sock_drv.h>
 #include <linux/filter.h>
 
 /* RX data path */
@@ -21,7 +20,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
  	if (unlikely(!xsk_buff_can_alloc(rq->xsk_pool, rq->mpwqe.pages_per_wqe)))
  		goto err;
-	BUILD_BUG_ON(sizeof(wi->alloc_units[0]) != sizeof(wi->alloc_units[0].xsk));
+	BUILD_BUG_ON(sizeof(wi->alloc_units[0]) != sizeof(wi->alloc_units[0].mxbuf));
  	XSK_CHECK_PRIV_TYPE(struct mlx5e_xdp_buff);
  	batch = xsk_buff_alloc_batch(rq->xsk_pool, (struct xdp_buff **)wi->alloc_units,
  				     rq->mpwqe.pages_per_wqe);

This batching API gets broken as well:
xsk_buff_alloc_batch fills an array of struct xdp_buff pointers, so it cannot correctly act on an array of struct mlx5e_xdp_buff, which contains additional fields.

Maybe letting mlx5e_xdp_buff point to its struct xdp_buff (instead of wrapping it) would solve the problems here; then we'd loop over the xdp_buff * array and copy the pointers into the struct mlx5e_xdp_buff * array.
Need to give it deeper thought...

struct mlx5e_xdp_buff {
	struct xdp_buff *xdp;
	struct mlx5_cqe64 *cqe;
	struct mlx5e_rq *rq;
};


@@ -33,8 +32,8 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
  	 * the first error, which will mean there are no more valid descriptors.
  	 */
  	for (; batch < rq->mpwqe.pages_per_wqe; batch++) {
-		wi->alloc_units[batch].xsk = xsk_buff_alloc(rq->xsk_pool);
-		if (unlikely(!wi->alloc_units[batch].xsk))
+		wi->alloc_units[batch].mxbuf = mlx5e_xsk_buff_alloc(rq->xsk_pool);
+		if (unlikely(!wi->alloc_units[batch].mxbuf))
  			goto err_reuse_batch;
  	}
@@ -44,7 +43,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 	if (likely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_ALIGNED)) {
  		for (i = 0; i < batch; i++) {
-			dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
+			dma_addr_t addr = mlx5e_xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].mxbuf);
 			umr_wqe->inline_mtts[i] = (struct mlx5_mtt) {
  				.ptag = cpu_to_be64(addr | MLX5_EN_WR),
@@ -53,7 +52,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
  		}
  	} else if (unlikely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_UNALIGNED)) {
  		for (i = 0; i < batch; i++) {
-			dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
+			dma_addr_t addr = mlx5e_xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].mxbuf);
 			umr_wqe->inline_ksms[i] = (struct mlx5_ksm) {
  				.key = rq->mkey_be,
@@ -65,7 +64,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
  		u32 mapping_size = 1 << (rq->mpwqe.page_shift - 2);
 		for (i = 0; i < batch; i++) {
-			dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
+			dma_addr_t addr = mlx5e_xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].mxbuf);
 			umr_wqe->inline_ksms[i << 2] = (struct mlx5_ksm) {
  				.key = rq->mkey_be,
@@ -91,7 +90,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
  		__be32 frame_size = cpu_to_be32(rq->xsk_pool->chunk_size);
 		for (i = 0; i < batch; i++) {
-			dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
+			dma_addr_t addr = mlx5e_xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].mxbuf);
 			umr_wqe->inline_klms[i << 1] = (struct mlx5_klm) {
  				.key = rq->mkey_be,
@@ -137,7 +136,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 err_reuse_batch:
 	while (--batch >= 0)
-		xsk_buff_free(wi->alloc_units[batch].xsk);
+		mlx5e_xsk_buff_free(wi->alloc_units[batch].mxbuf);
 
 err:
 	rq->stats->buff_alloc_err++;
@@ -156,7 +155,7 @@ int mlx5e_xsk_alloc_rx_wqes_batched(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
  	 * allocate XDP buffers straight into alloc_units.
  	 */
  	BUILD_BUG_ON(sizeof(rq->wqe.alloc_units[0]) !=
-		     sizeof(rq->wqe.alloc_units[0].xsk));
+		     sizeof(rq->wqe.alloc_units[0].mxbuf));
  	buffs = (struct xdp_buff **)rq->wqe.alloc_units;
  	contig = mlx5_wq_cyc_get_size(wq) - ix;
  	if (wqe_bulk <= contig) {
@@ -177,8 +176,9 @@ int mlx5e_xsk_alloc_rx_wqes_batched(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
  		/* Assumes log_num_frags == 0. */
  		frag = &rq->wqe.frags[j];
-		addr = xsk_buff_xdp_get_frame_dma(frag->au->xsk);
+		addr = mlx5e_xsk_buff_xdp_get_frame_dma(frag->au->mxbuf);
  		wqe->data[0].addr = cpu_to_be64(addr + rq->buff.headroom);
+		frag->au->mxbuf->rq = rq;
  	}
 	return alloc;
@@ -199,12 +199,13 @@ int mlx5e_xsk_alloc_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
  		/* Assumes log_num_frags == 0. */
  		frag = &rq->wqe.frags[j];
-		frag->au->xsk = xsk_buff_alloc(rq->xsk_pool);
-		if (unlikely(!frag->au->xsk))
+		frag->au->mxbuf = mlx5e_xsk_buff_alloc(rq->xsk_pool);
+		if (unlikely(!frag->au->mxbuf))
  			return i;
-		addr = xsk_buff_xdp_get_frame_dma(frag->au->xsk);
+		addr = mlx5e_xsk_buff_xdp_get_frame_dma(frag->au->mxbuf);
  		wqe->data[0].addr = cpu_to_be64(addr + rq->buff.headroom);
+		frag->au->mxbuf->rq = rq;
  	}
 	return wqe_bulk;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 7b08653be000..4313165709cb 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -41,7 +41,6 @@
  #include <net/gro.h>
  #include <net/udp.h>
  #include <net/tcp.h>
-#include <net/xdp_sock_drv.h>
  #include "en.h"
  #include "en/txrx.h"
  #include "en_tc.h"
@@ -434,7 +433,7 @@ static inline void mlx5e_free_rx_wqe(struct mlx5e_rq *rq,
  		 * put into the Reuse Ring, because there is no way to return
  		 * the page to the userspace when the interface goes down.
  		 */
-		xsk_buff_free(wi->au->xsk);
+		mlx5e_xsk_buff_free(wi->au->mxbuf);
  		return;
  	}
@@ -515,7 +514,7 @@ mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, bool recycle
  		 */
  		for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
  			if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
-				xsk_buff_free(alloc_units[i].xsk);
+				mlx5e_xsk_buff_free(alloc_units[i].mxbuf);
  	} else {
  		for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
  			if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))



