On 2020/1/10 23:26, Jason Gunthorpe wrote:
> On Sat, Dec 28, 2019 at 11:28:54AM +0800, Yixian Liu wrote:
>> +void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
>> +{
>> +	struct hns_roce_work *flush_work;
>> +
>> +	flush_work = kzalloc(sizeof(struct hns_roce_work), GFP_ATOMIC);
>> +	if (!flush_work)
>> +		return;
>
> You changed it to only queue once, so why do we need the allocation
> now? That was the whole point..

Hi Jason,

The flush work is queued **not only once**, because the flag being_pushed
is reset to 0 while modifying the qp to the error state:

hns_roce_v2_modify_qp {
	...
	if (new_state == IB_QPS_ERR) {
		spin_lock_irqsave(&hr_qp->sq.lock, sq_flag);
		...
		hr_qp->state = IB_QPS_ERR;
		hr_qp->being_pushed = 0;
		...
	}
	...
}

This means the newly updated PI value needs to be delivered by
initializing another flush work, so there may be two flush works in the
workqueue at the same time. Thus, we still need the allocation here.

> And the other patch shouldn't be manipulating being_pushed without
> some kind of locking

Agree. The sq and rq spin locks need to be held when updating it in
modify qp; will fix in the next version.

> Jason