On Fri, Nov 23, 2018 at 03:19:20PM +0800, Lijun Ou wrote:
> +void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type)
> +{
> +	struct hns_roce_srq_table *srq_table = &hr_dev->srq_table;
> +	struct hns_roce_srq *srq;
> +
> +	spin_lock(&srq_table->lock);
> +	srq = radix_tree_lookup(&srq_table->tree,
> +				srqn & (hr_dev->caps.num_srqs - 1));
> +	spin_unlock(&srq_table->lock);
> +	if (srq) {
> +		atomic_inc(&srq->refcount);

This locking arrangement still looks wrong. What prevents srq from
being freed before the atomic_inc is run?

If you put the atomic_inc inside the spinlock, then this:

+	spin_lock_irq(&srq_table->lock);
+	radix_tree_delete(&srq_table->tree, srq->srqn);
+	spin_unlock_irq(&srq_table->lock);
+
+	if (atomic_dec_and_test(&srq->refcount))

is serialized and doesn't have a race anymore.

Jason
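
[Editor's note: for readers following along, below is a minimal sketch of
the serialized pattern being suggested. It reuses the names from the
quoted patch (srq_table->lock, srq_table->tree, srq->refcount, srq->srqn);
the srq->event callback, the srq->free completion, and the destroy-path
helper name are assumptions borrowed from similar RDMA drivers, not from
this patch.]

	/*
	 * Event path: take the reference while still holding
	 * srq_table->lock, so the lookup cannot observe an srq that
	 * the destroy path is about to free.
	 */
	void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn,
				int event_type)
	{
		struct hns_roce_srq_table *srq_table = &hr_dev->srq_table;
		struct hns_roce_srq *srq;

		spin_lock(&srq_table->lock);
		srq = radix_tree_lookup(&srq_table->tree,
					srqn & (hr_dev->caps.num_srqs - 1));
		if (srq)
			atomic_inc(&srq->refcount); /* inc inside the lock */
		spin_unlock(&srq_table->lock);

		if (!srq)
			return;

		/* srq->event is an assumed per-SRQ callback */
		srq->event(srq, event_type);

		if (atomic_dec_and_test(&srq->refcount))
			complete(&srq->free); /* assumed completion field */
	}

	/*
	 * Destroy path: unpublish the srq from the tree first, then
	 * drop the initial reference and wait for any event handlers
	 * still holding one. Because the radix_tree_delete runs under
	 * the same lock as the lookup+inc above, no new reference can
	 * be taken after this point.
	 */
	static void hns_roce_srq_free(struct hns_roce_dev *hr_dev,
				      struct hns_roce_srq *srq)
	{
		struct hns_roce_srq_table *srq_table = &hr_dev->srq_table;

		spin_lock_irq(&srq_table->lock);
		radix_tree_delete(&srq_table->tree, srq->srqn);
		spin_unlock_irq(&srq_table->lock);

		if (atomic_dec_and_test(&srq->refcount))
			complete(&srq->free);
		wait_for_completion(&srq->free);
	}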