On Tue, Jul 07, 2020 at 02:42:58PM +0100, Matthew Wilcox wrote:
> On Tue, Jul 07, 2020 at 04:15:51PM +0300, Leon Romanovsky wrote:
> > +++ b/drivers/infiniband/hw/mlx5/srq_cmd.c
> > @@ -83,11 +83,11 @@ struct mlx5_core_srq *mlx5_cmd_get_srq(struct mlx5_ib_dev *dev, u32 srqn)
> >  	struct mlx5_srq_table *table = &dev->srq_table;
> >  	struct mlx5_core_srq *srq;
> >
> > -	xa_lock(&table->array);
> > +	xa_lock_irq(&table->array);
> >  	srq = xa_load(&table->array, srqn);
> >  	if (srq)
> >  		refcount_inc(&srq->common.refcount);
> > -	xa_unlock(&table->array);
> > +	xa_unlock_irq(&table->array);
>
> This one is correct.
>
> > @@ -655,11 +655,11 @@ static int srq_event_notifier(struct notifier_block *nb,
> >  	eqe = data;
> >  	srqn = be32_to_cpu(eqe->data.qp_srq.qp_srq_n) & 0xffffff;
> >
> > -	xa_lock(&table->array);
> > +	xa_lock_irq(&table->array);
> >  	srq = xa_load(&table->array, srqn);
> >  	if (srq)
> >  		refcount_inc(&srq->common.refcount);
> > -	xa_unlock(&table->array);
> > +	xa_unlock_irq(&table->array);
>
> This one is not. srq_event_notifier() is called in irq context, always,
> so leave it as xa_lock() / xa_unlock().
>
> You could switch to a less-locked model, which would look something
> like this:
>
> 	rcu_read_lock();
> 	srq = xa_load(&table->array, srqn);
> 	if (srq && !refcount_inc_not_zero(&srq->common.refcount))
> 		srq = NULL;
> 	rcu_read_unlock();
>
> Then you wouldn't need to disable irqs while accessing the array.
> But you would need to rcu-free the srqs.

Thanks for your feedback,

At this point in time, we don't need the rcu_* optimization; it is not
a bottleneck yet :).

Thanks
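
For context on the locking rule behind "this one is correct / this one
is not", here is a minimal sketch; the static xarray and function names
below are hypothetical stand-ins, not the actual mlx5 code. A spinlock
that is also taken from hardirq context must be taken with interrupts
disabled in process context, otherwise an interrupt arriving while the
plain lock is held on the same CPU could try to retake it and deadlock;
code already running in hardirq context has interrupts off, so the
plain variant is both sufficient and required there.

	#include <linux/xarray.h>

	/* Hypothetical table, mirroring table->array in the hunks above. */
	static DEFINE_XARRAY(srq_array);

	/*
	 * Process context: an interrupt handler contending for this lock
	 * could fire at any point. If we held the plain xa_lock() when it
	 * did, it would spin on a lock we can never release, so interrupts
	 * must be disabled around the critical section.
	 */
	static void *lookup_from_process_context(unsigned long srqn)
	{
		void *srq;

		xa_lock_irq(&srq_array);
		srq = xa_load(&srq_array, srqn);
		xa_unlock_irq(&srq_array);
		return srq;
	}

	/*
	 * Hardirq context (the srq_event_notifier() case): interrupts are
	 * already off, so xa_lock_irq() is redundant and xa_unlock_irq()
	 * would wrongly re-enable them. Use the plain variant.
	 */
	static void *lookup_from_irq_context(unsigned long srqn)
	{
		void *srq;

		xa_lock(&srq_array);
		srq = xa_load(&srq_array, srqn);
		xa_unlock(&srq_array);
		return srq;
	}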
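
And a minimal sketch of the rcu-freed variant Matthew describes,
pairing his quoted reader-side snippet with a matching teardown path.
The struct layout, function names, and destroy flow are assumptions for
illustration only, not the actual mlx5 implementation; the real change
would amount to adding an rcu_head to struct mlx5_core_srq and
deferring the free.

	#include <linux/rcupdate.h>
	#include <linux/refcount.h>
	#include <linux/slab.h>
	#include <linux/xarray.h>

	/* Hypothetical, pared-down stand-in for struct mlx5_core_srq;
	 * only the rcu_head is genuinely new, for kfree_rcu() below. */
	struct demo_srq {
		struct {
			refcount_t refcount;
		} common;
		struct rcu_head rcu;
		/* ... payload fields elided ... */
	};

	/* Reader side, usable from any context: RCU guarantees the object
	 * is not freed while we inspect it, and refcount_inc_not_zero()
	 * rejects objects already on their way out. No irq disabling. */
	static struct demo_srq *demo_srq_get(struct xarray *array, u32 srqn)
	{
		struct demo_srq *srq;

		rcu_read_lock();
		srq = xa_load(array, srqn);
		if (srq && !refcount_inc_not_zero(&srq->common.refcount))
			srq = NULL;	/* raced with teardown */
		rcu_read_unlock();

		return srq;
	}

	/* Teardown side: unpublish from the array first, then drop the
	 * final reference; kfree_rcu() defers the actual free until every
	 * reader that might still see the pointer has left its RCU read
	 * section. */
	static void demo_srq_destroy(struct xarray *array,
				     struct demo_srq *srq, u32 srqn)
	{
		xa_erase(array, srqn);
		if (refcount_dec_and_test(&srq->common.refcount))
			kfree_rcu(srq, rcu);
	}

The invariant doing the work here: an object reachable through the
xarray is only freed after both xa_erase() and a full RCU grace period,
which is what makes the lockless xa_load() plus
refcount_inc_not_zero() combination safe.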