On Tue, Feb 06, 2024 at 09:10:27PM +0200, Tariq Toukan wrote:
> 
> 
> On 06/02/2024 19:12, Joe Damato wrote:
> >On Tue, Feb 06, 2024 at 10:11:28AM +0200, Tariq Toukan wrote:
> >>
> >>
> >>On 06/02/2024 3:03, Joe Damato wrote:
> >>>Make mlx5 compatible with the newly added netlink queue GET APIs.
> >>>
> >>>Signed-off-by: Joe Damato <jdamato@xxxxxxxxxx>
> >>
> >>+ Gal
> >>
> >>Hi Joe,
> >>Thanks for your patch.
> >>
> >>We have already prepared a similar patch, and it's part of our internal
> >>submission queue, and planned to be submitted soon.
> >>
> >>Please see my comments below, let us know if you're welling to respin a V2
> >>or wait for our patch.
> >
> >Do you have a rough estimate on when it'll be submitted?
> >
> >If it's several months out I'll try again, but if it's expected to be
> >submit in the next few weeks I'll wait for your official change.
> 
> It'll be in the next few weeks.

OK, well I tweaked the v3 I had queued based on your feedback. I am
definitely not an mlx5 expert, so I have no idea if it's correct.

The changes can be summed up as:

  - mlx5e_activate_channel and mlx5e_deactivate_channel now call
    netif_queue_set_napi for each mlx5e_txqsq as it is
    activated/deactivated. I assumed sq->txq_ix is the correct index,
    but I may well be wrong.
  - mlx5e_activate_qos_sq and mlx5e_deactivate_qos_sq handle the QOS/HTB
    case, similar to the above.
  - The irq field added to struct mlx5e_channel has been removed.

If you think that sounds vaguely correct, I can send the v3 tomorrow when
it has been >24hrs as per Rahul's request. (A rough sketch of the activate
path is pasted further down, after the coalesce discussion, to show what I
mean.)

> >
> >BTW, are the per queue coalesce changes in that same queue? It was
> >mentioned previously [1] that this feature is coming after we submit a
> >simple attempt as an RFC. If that feature isn't planned or won't be submit
> >anytime soon, can you let us know and we can try to attempt an RFC v3 for
> >it?
> >
> 
> The per queue coalesce series is going through internal code review, and is
> expected to also be ready in a matter of a few weeks.

OK, great. Thanks for letting me know; we are definitely interested in
using this feature.
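To make the TX bullet above concrete, here is roughly the shape of what I
have queued for mlx5e_activate_channel (a sketch only, untested; treating
sq->txq_ix as the netdev-level TX queue index is my assumption and may be
wrong):

	static void mlx5e_activate_channel(struct mlx5e_channel *c)
	{
		int tc;

		/* ... existing activation calls unchanged ... */

		/* Map each TX queue owned by this channel to the channel's
		 * NAPI. Assumes sq->txq_ix is the netdev TX queue index.
		 */
		for (tc = 0; tc < c->num_tc; tc++)
			netif_queue_set_napi(c->netdev, c->sq[tc].txq_ix,
					     NETDEV_QUEUE_TYPE_TX, &c->napi);

		/* RX remains 1:1 with the channel, so c->ix is used as-is */
		netif_queue_set_napi(c->netdev, c->ix, NETDEV_QUEUE_TYPE_RX,
				     &c->napi);
	}

mlx5e_deactivate_channel does the mirror image with NULL in place of
&c->napi, and the QOS/HTB SQs get the same treatment in
mlx5e_activate_qos_sq/mlx5e_deactivate_qos_sq using each qos sq's txq_ix.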
> >[1]: https://lore.kernel.org/lkml/874jj34e67.fsf@xxxxxxxxxx/
> >
> >>>---
> >>>  drivers/net/ethernet/mellanox/mlx5/core/en.h      | 1 +
> >>>  drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 8 ++++++++
> >>>  2 files changed, 9 insertions(+)
> >>>
> >>>diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> >>>index 55c6ace0acd5..3f86ee1831a8 100644
> >>>--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
> >>>+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> >>>@@ -768,6 +768,7 @@ struct mlx5e_channel {
> >>> 	u16 qos_sqs_size;
> >>> 	u8 num_tc;
> >>> 	u8 lag_port;
> >>>+	unsigned int irq;
> >>>
> >>> 	/* XDP_REDIRECT */
> >>> 	struct mlx5e_xdpsq xdpsq;
> >>>
> >>>diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> >>>index c8e8f512803e..e1bfff1fb328 100644
> >>>--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> >>>+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> >>>@@ -2473,6 +2473,9 @@ static void mlx5e_close_queues(struct mlx5e_channel *c)
> >>> 	mlx5e_close_tx_cqs(c);
> >>> 	mlx5e_close_cq(&c->icosq.cq);
> >>> 	mlx5e_close_cq(&c->async_icosq.cq);
> >>>+
> >>>+	netif_queue_set_napi(c->netdev, c->ix, NETDEV_QUEUE_TYPE_TX, NULL);
> >>>+	netif_queue_set_napi(c->netdev, c->ix, NETDEV_QUEUE_TYPE_RX, NULL);
> >>> }
> >>>
> >>> static u8 mlx5e_enumerate_lag_port(struct mlx5_core_dev *mdev, int ix)
> >>>@@ -2558,6 +2561,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
> >>> 	c->stats = &priv->channel_stats[ix]->ch;
> >>> 	c->aff_mask = irq_get_effective_affinity_mask(irq);
> >>> 	c->lag_port = mlx5e_enumerate_lag_port(priv->mdev, ix);
> >>>+	c->irq = irq;
> >>>
> >>> 	netif_napi_add(netdev, &c->napi, mlx5e_napi_poll);
> >>>@@ -2602,6 +2606,10 @@ static void mlx5e_activate_channel(struct mlx5e_channel *c)
> >>> 		mlx5e_activate_xsk(c);
> >>> 	else
> >>> 		mlx5e_activate_rq(&c->rq);
> >>>+
> >>>+	netif_napi_set_irq(&c->napi, c->irq);
> >>
> >>Can be safely moved to mlx5e_open_channel without interfering with other
> >>existing mapping. This would save the new irq field in mlx5e_channel.
> >
> >Sure, yea, I have that change queued already from last night.
> >
> 
> I see now.. I replied before noticing it.
> 
> >>>+	netif_queue_set_napi(c->netdev, c->ix, NETDEV_QUEUE_TYPE_TX, &c->napi);
> >>
> >>In some configurations we have multiple txqs per channel, so the txq indices
> >>are not 1-to-1 with their channel index.
> >>
> >>This should be called per each txq, with the proper txq index.
> >>
> >>It should be done also for feture-dedicated SQs (like QOS/HTB).
> >
> >OK. I think the above makes sense and I'll look into it if I have some time
> >this week.
> 
> >>>+	netif_queue_set_napi(c->netdev, c->ix, NETDEV_QUEUE_TYPE_RX, &c->napi);
> >>
> >>For consistency, I'd move this one as well, to match the TX handling.
> >
> >Sure.
> >
> >>> }
> >>>
> >>> static void mlx5e_deactivate_channel(struct mlx5e_channel *c)