On Wed, May 31 2023 at 18:52, Chuck Lever III wrote:
>> On May 31, 2023, at 1:11 PM, Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
>
> This addresses the problem for me with both is_managed = 1
> and is_managed = false:
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
> index db5687d9fec9..bcf5df316c8f 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
> @@ -570,11 +570,11 @@ int mlx5_irqs_request_vectors(struct mlx5_core_dev *dev, u16 *cpus, int nirqs,
>  	af_desc.is_managed = false;
>  	for (i = 0; i < nirqs; i++) {
> +		cpumask_clear(&af_desc.mask);
>  		cpumask_set_cpu(cpus[i], &af_desc.mask);
>  		irq = mlx5_irq_request(dev, i + 1, &af_desc, rmap);
>  		if (IS_ERR(irq))
>  			break;
> -		cpumask_clear(&af_desc.mask);
>  		irqs[i] = irq;
>  	}
>
> If you agree this looks reasonable, I can package it with a
> proper patch description and send it to Eli and Saeed.

It does. I clearly missed that function when going through the possible
callchains.

Yes, that's definitely broken and the fix is correct.

bbac70c74183 ("net/mlx5: Use newer affinity descriptor") is the culprit.

Feel free to add:

  Reviewed-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
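
P.S. For anyone skimming the thread, here is a minimal userspace sketch of
why the ordering matters. This is not kernel code: fake_irq_request() and
the 0xf0 "stack junk" value are stand-ins made up for illustration. The
point (assuming I'm reading mlx5_irqs_request_vectors() right) is that
af_desc is an on-stack variable, so with cpumask_clear() at the bottom of
the loop the first mlx5_irq_request() call sees whatever garbage happened
to be in the mask; clearing at the top guarantees every request sees
exactly one CPU.

  /* Compile with: cc -o maskdemo maskdemo.c && ./maskdemo */
  #include <stdio.h>

  /* Stand-in for mlx5_irq_request(): just report which "CPUs" are set. */
  static void fake_irq_request(int vec, unsigned long mask)
  {
          printf("vector %d: mask = 0x%02lx\n", vec, mask);
  }

  int main(void)
  {
          int cpus[] = { 1, 3, 5 };
          int nirqs = 3, i;
          unsigned long mask = 0xf0;  /* simulated uninitialized stack junk */

          /* Broken order (clear after the request): vector 1 sees 0xf2,
           * i.e. the junk bits plus cpus[0]. */
          for (i = 0; i < nirqs; i++) {
                  mask |= 1UL << cpus[i];  /* cpumask_set_cpu() */
                  fake_irq_request(i + 1, mask);
                  mask = 0;                /* cpumask_clear(): too late for
                                            * the first pass */
          }

          mask = 0xf0;                     /* fresh "junk" again */

          /* Fixed order (clear first): each vector sees exactly one CPU. */
          for (i = 0; i < nirqs; i++) {
                  mask = 0;                /* cpumask_clear() up front */
                  mask |= 1UL << cpus[i];  /* cpumask_set_cpu() */
                  fake_irq_request(i + 1, mask);
          }
          return 0;
  }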