On Tue, Aug 22, 2023 at 08:27:25AM +0000, ynachum@xxxxxxxxxx wrote:
> From: Yonatan Nachum <ynachum@xxxxxxxxxx>
>
> When trying to destroy a QP or CQ, we first decrease the refcount and
> potentially free memory regions allocated for the object, and then
> request the device to destroy the object. If the device fails, the
> object isn't fully destroyed, so the user/IB core can try to destroy
> the object again, which leads to an underflow when decreasing an
> already-zeroed refcount.
> Deallocate resources in reverse order of allocating them to free them
> safely.
>
> Reviewed-by: Michael Margolin <mrgolin@xxxxxxxxxx>
> Reviewed-by: Yossi Leybovich <sleybo@xxxxxxxxxx>
> Signed-off-by: Yonatan Nachum <ynachum@xxxxxxxxxx>
> ---
>  drivers/infiniband/hw/efa/efa_verbs.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)

Applied, together with the following Fixes line:

Fixes: ff6629f88c52 ("RDMA/efa: Do not delay freeing of DMA pages")

Thanks

>
> diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
> index 7a27d79c0541..0f8ca99d0827 100644
> --- a/drivers/infiniband/hw/efa/efa_verbs.c
> +++ b/drivers/infiniband/hw/efa/efa_verbs.c
> @@ -453,12 +453,12 @@ int efa_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
>
>  	ibdev_dbg(&dev->ibdev, "Destroy qp[%u]\n", ibqp->qp_num);
>
> -	efa_qp_user_mmap_entries_remove(qp);
> -
>  	err = efa_destroy_qp_handle(dev, qp->qp_handle);
>  	if (err)
>  		return err;
>
> +	efa_qp_user_mmap_entries_remove(qp);
> +
>  	if (qp->rq_cpu_addr) {
>  		ibdev_dbg(&dev->ibdev,
>  			  "qp->cpu_addr[0x%p] freed: size[%lu], dma[%pad]\n",
> @@ -1017,8 +1017,8 @@ int efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
>  		  "Destroy cq[%d] virt[0x%p] freed: size[%lu], dma[%pad]\n",
>  		  cq->cq_idx, cq->cpu_addr, cq->size, &cq->dma_addr);
>
> -	efa_cq_user_mmap_entries_remove(cq);
>  	efa_destroy_cq_idx(dev, cq->cq_idx);
> +	efa_cq_user_mmap_entries_remove(cq);
>  	if (cq->eq) {
>  		xa_erase(&dev->cqs_xa, cq->cq_idx);
>  		synchronize_irq(cq->eq->irq.irqn);
> --
> 2.40.1
>