> -----Original Message-----
> Subject: RE: [PATCH v2 1/1] RDMA/mana_ib: Add EQ interrupt support to
> mana ib driver.
>
> > Subject: [PATCH v2 1/1] RDMA/mana_ib: Add EQ interrupt support to mana
> > ib driver.
> >
> > Add EQ interrupt support for the mana ib driver. Allocate EQs per
> > ucontext to receive interrupts. Attach an EQ when a CQ is created, and
> > call the CQ interrupt handler when a completion interrupt happens. EQs
> > are destroyed when the ucontext is deallocated.
> >
> > The change calls some public APIs in the mana ethernet driver to
> > allocate EQs and other resources. The EQ processing routine is also
> > shared by the mana ethernet and mana ib drivers.
> >
> > Co-developed-by: Ajay Sharma <sharmaajay@xxxxxxxxxxxxx>
> > Signed-off-by: Ajay Sharma <sharmaajay@xxxxxxxxxxxxx>
> > Signed-off-by: Wei Hu <weh@xxxxxxxxxxxxx>
> > ---
> >
> > v2: Use ibdev_dbg to print error messages and return -ENOMEM
> >     when kzalloc fails.
> >
> >  drivers/infiniband/hw/mana/cq.c               |  32 ++++-
> >  drivers/infiniband/hw/mana/main.c             |  87 ++++++++++++
> >  drivers/infiniband/hw/mana/mana_ib.h          |   4 +
> >  drivers/infiniband/hw/mana/qp.c               |  90 +++++++++++-
> >  .../net/ethernet/microsoft/mana/gdma_main.c   | 131 ++++++++++--------
> >  drivers/net/ethernet/microsoft/mana/mana_en.c |   1 +
> >  include/net/mana/gdma.h                       |   9 +-
> >  7 files changed, 290 insertions(+), 64 deletions(-)
> >
> > diff --git a/drivers/infiniband/hw/mana/cq.c b/drivers/infiniband/hw/mana/cq.c
> > index d141cab8a1e6..3cd680e0e753 100644
> > --- a/drivers/infiniband/hw/mana/cq.c
> > +++ b/drivers/infiniband/hw/mana/cq.c
> > @@ -12,13 +12,20 @@ int mana_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
> >  	struct ib_device *ibdev = ibcq->device;
> >  	struct mana_ib_create_cq ucmd = {};
> >  	struct mana_ib_dev *mdev;
> > +	struct gdma_context *gc;
> > +	struct gdma_dev *gd;
> >  	int err;
> >
> >  	mdev = container_of(ibdev, struct mana_ib_dev, ib_dev);
> > +	gd = mdev->gdma_dev;
> > +	gc = gd->gdma_context;
> >
> >  	if (udata->inlen < sizeof(ucmd))
> >  		return -EINVAL;
> >
> > +	cq->comp_vector = attr->comp_vector > gc->max_num_queues ?
> > +			  0 : attr->comp_vector;
> > +
> >  	err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
> >  	if (err) {
> >  		ibdev_dbg(ibdev,
> > @@ -69,11 +76,32 @@ int mana_ib_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
> >  	struct mana_ib_cq *cq = container_of(ibcq, struct mana_ib_cq, ibcq);
> >  	struct ib_device *ibdev = ibcq->device;
> >  	struct mana_ib_dev *mdev;
> > +	struct gdma_context *gc;
> > +	struct gdma_dev *gd;
> > +
> >
> >  	mdev = container_of(ibdev, struct mana_ib_dev, ib_dev);
> > +	gd = mdev->gdma_dev;
> > +	gc = gd->gdma_context;
> >
> > -	mana_ib_gd_destroy_dma_region(mdev, cq->gdma_region);
> > -	ib_umem_release(cq->umem);
> > +
> > +
> > +	if (atomic_read(&ibcq->usecnt) == 0) {
> > +		mana_ib_gd_destroy_dma_region(mdev, cq->gdma_region);
>
> Need to check if this function fails. The following code will call
> kfree(gc->cq_table[cq->id]); it's possible that an IRQ is happening at
> the same time if the CQ is not destroyed.

Sure. Will update.

> > +		ibdev_dbg(ibdev, "freeing gdma cq %p\n", gc->cq_table[cq->id]);
> > +		kfree(gc->cq_table[cq->id]);
> > +		gc->cq_table[cq->id] = NULL;
> > +		ib_umem_release(cq->umem);
> > +	}
> >
> >  	return 0;
> >  }
> > +
> > +void mana_ib_cq_handler(void *ctx, struct gdma_queue *gdma_cq)
> > +{
> > +	struct mana_ib_cq *cq = ctx;
> > +	struct ib_device *ibdev = cq->ibcq.device;
> > +
> > +	ibdev_dbg(ibdev, "Enter %s %d\n", __func__, __LINE__);
>
> This debug message seems overkill?
>
> > +	cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
> > +}
> > diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
> > index 7be4c3adb4e2..e4efbcaed10e 100644
> > --- a/drivers/infiniband/hw/mana/main.c
> > +++ b/drivers/infiniband/hw/mana/main.c
> > @@ -143,6 +143,81 @@ int mana_ib_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
> >  	return err;
> >  }
> >
> > +static void mana_ib_destroy_eq(struct mana_ib_ucontext *ucontext,
> > +			       struct mana_ib_dev *mdev)
> > +{
> > +	struct gdma_context *gc = mdev->gdma_dev->gdma_context;
> > +	struct ib_device *ibdev = ucontext->ibucontext.device;
> > +	struct gdma_queue *eq;
> > +	int i;
> > +
> > +	if (!ucontext->eqs)
> > +		return;
> > +
> > +	for (i = 0; i < gc->max_num_queues; i++) {
> > +		eq = ucontext->eqs[i].eq;
> > +		if (!eq)
> > +			continue;
> > +
> > +		mana_gd_destroy_queue(gc, eq);
> > +	}
> > +
> > +	kfree(ucontext->eqs);
> > +	ucontext->eqs = NULL;
> > +
> > +	ibdev_dbg(ibdev, "destroyed eq's count %d\n", gc->max_num_queues);
> > +}
>
> Will gc->max_num_queues change after destroying an EQ?

I think it will not change. Also, the compiler might optimize the code to
read the value just once and keep it in a register.

Thanks,
Wei
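
P.S. If re-reading the value inside the loop is a concern, one option for the
next version is to take a snapshot into a local variable before the loop.
Below is a rough, untested sketch; everything in it comes from the patch as
posted, and the only new name is the local num_queues:

static void mana_ib_destroy_eq(struct mana_ib_ucontext *ucontext,
			       struct mana_ib_dev *mdev)
{
	struct gdma_context *gc = mdev->gdma_dev->gdma_context;
	struct ib_device *ibdev = ucontext->ibucontext.device;
	/* Read the queue count once; the loop and the debug print below
	 * are then guaranteed to use the same value.
	 */
	int num_queues = gc->max_num_queues;
	struct gdma_queue *eq;
	int i;

	if (!ucontext->eqs)
		return;

	for (i = 0; i < num_queues; i++) {
		eq = ucontext->eqs[i].eq;
		if (!eq)
			continue;

		mana_gd_destroy_queue(gc, eq);
	}

	kfree(ucontext->eqs);
	ucontext->eqs = NULL;

	ibdev_dbg(ibdev, "destroyed eq's count %d\n", num_queues);
}

This would also make the debug print independent of any later change to
gc->max_num_queues, whatever the compiler decides to do.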