On Wed, Jun 12, 2019 at 02:47:20PM +0000, Mark Zhang wrote:
> >>>>>> +static int mlx5_ib_counter_unbind_qp(struct ib_qp *qp, bool force)
> >>>>>> +{
> >>>>>> +	struct mlx5_ib_dev *dev = to_mdev(qp->device);
> >>>>>> +	struct rdma_counter *counter = qp->counter;
> >>>>>> +	int err;
> >>>>>> +
> >>>>>> +	err = mlx5_ib_qp_set_counter(qp, NULL);
> >>>>>> +	if (err && !force)
> >>>>>> +		return err;
> >>>>>> +
> >>>>>> +	/*
> >>>>>> +	 * Deallocate the counter if this is the last QP bound on it;
> >>>>>> +	 * If @force is set then we still deallocate the q counter
> >>>>>> +	 * no matter if there's any error in previous. used for cases
> >>>>>> +	 * like qp destroy.
> >>>>>> +	 */
> >>>>>> +	if (atomic_read(&counter->usecnt) == 1)
> >>>>>> +		return mlx5_core_dealloc_q_counter(dev->mdev, counter->id);
> >>>>>
> >>>>> This looks like a nonsense thing to write, what it is trying to do
> >>>>> with that atomic?
> >>>>>
> >>>>> I still can't see why this isn't a normal kref.
> >>>>>
> >>>>
> >>>> Hi Jason,
> >>>>
> >>>> Have discussed with Leon, unlike other resources, counter alloc/dealloc
> >>>> isn't called explicitly. So we need a refcount to record how many QPs
> >>>> are bound on this counter, when it comes to 0 then the counter can be
> >>>> deallocated. Whether to use atomic or kref the code is similar, it is
> >>>> not able to take advantage of kref/completion.
> >>>
> >>> That doesn't explain the nonsense "atomic_read(&counter->usecnt) == 1"
> >>> test
> >>
> >> It means that all QPs "returned" this counter.
> >
> > It doesn't make sense to do something like that with an atomic, either
> > you know there is no possible atomic_inc/dec at this point (which begs the
> > question why test it), or it is racy
> >
>
> How about add a new parameter "last_qp", if it is true then deallocate
> the counter? So that the "atomic_read()" check can be performed in core
> layer.
Maybe, but there still needs to be some kind of locking protecting
parallel atomic_inc/dec.

I still think this should be a kref.

Jason