cpumask_weight_gt() is more efficient than cpumask_weight() because it
may stop traversing the cpumask early, depending on the condition.

CC: David S. Miller <davem@xxxxxxxxxxxxx>
CC: Eric Dumazet <edumazet@xxxxxxxxxx>
CC: Jakub Kicinski <kuba@xxxxxxxxxx>
CC: Leon Romanovsky <leon@xxxxxxxxxx>
CC: Paolo Abeni <pabeni@xxxxxxxxxx>
CC: Saeed Mahameed <saeedm@xxxxxxxxxx>
CC: netdev@xxxxxxxxxxxxxxx
CC: linux-rdma@xxxxxxxxxxxxxxx
CC: linux-kernel@xxxxxxxxxxxxxxx
Signed-off-by: Yury Norov <yury.norov@xxxxxxxxx>
---
 drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c b/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
index 380a208ab137..d57f804ee934 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
@@ -58,7 +58,7 @@ irq_pool_request_irq(struct mlx5_irq_pool *pool, const struct cpumask *req_mask)
 	if (err)
 		return ERR_PTR(err);
 	if (pool->irqs_per_cpu) {
-		if (cpumask_weight(req_mask) > 1)
+		if (cpumask_weight_gt(req_mask, 1))
 			/* if req_mask contain more then one CPU, set the least loadad CPU
 			 * of req_mask */
-- 
2.32.0
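
For reference, below is a minimal userspace sketch of the short-circuit
idea behind cpumask_weight_gt(). The helper names and the plain
unsigned-long-array bitmap are simplifications for illustration, not
the kernel implementation (the real helpers live in <linux/cpumask.h>
and <linux/bitmap.h>):

/*
 * Sketch: weight() must always count every set bit, while weight_gt()
 * can return as soon as the running count exceeds the threshold.
 */
#include <stdbool.h>
#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Count all set bits: always walks the whole bitmap. */
static unsigned int bitmap_weight(const unsigned long *bits, unsigned int nbits)
{
	unsigned int i, w = 0;

	for (i = 0; i < nbits; i++)
		w += (bits[i / BITS_PER_LONG] >> (i % BITS_PER_LONG)) & 1;
	return w;
}

/* Return true once more than 'num' bits are set: may stop early. */
static bool bitmap_weight_gt(const unsigned long *bits, unsigned int nbits,
			     unsigned int num)
{
	unsigned int i, w = 0;

	for (i = 0; i < nbits; i++) {
		w += (bits[i / BITS_PER_LONG] >> (i % BITS_PER_LONG)) & 1;
		if (w > num)
			return true;	/* short-circuit: no need to keep counting */
	}
	return false;
}

int main(void)
{
	unsigned long mask[2] = { 0xffUL, 0x0UL };	/* "CPUs" 0-7 set */
	unsigned int nbits = 2 * BITS_PER_LONG;

	printf("weight     = %u\n", bitmap_weight(mask, nbits));
	printf("weight > 1 = %d\n", bitmap_weight_gt(mask, nbits, 1));
	return 0;
}

In the patched hunk the threshold is 1, so the gt() form can stop after
seeing the second set bit instead of scanning the full mask.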