On Mon, Jan 02, 2023 at 08:34:33PM -0500, Zhu Yanjun wrote:
> From: Zhu Yanjun <yanjun.zhu@xxxxxxxxx>
>
> This is a followup to the EFA dmabuf series[1]. The irdma driver does
> not currently support on-demand paging (ODP), so it uses habanalabs
> as the dmabuf exporter and irdma as the importer to allow peer-to-peer
> access through libibverbs.
>
> This commit uses ib_umem_dmabuf_get_pinned(), introduced in the EFA
> dmabuf series[1], which lets the driver get a dmabuf umem that is
> pinned and does not require a move_notify callback implementation.
> The returned umem is pinned and DMA mapped like standard CPU umems,
> and is released through ib_umem_release().
>
> [1] https://lore.kernel.org/lkml/20211007114018.GD2688930@xxxxxxxx/t/
>
> Signed-off-by: Zhu Yanjun <yanjun.zhu@xxxxxxxxx>
> ---
>  drivers/infiniband/hw/irdma/verbs.c | 158 ++++++++++++++++++++++++++++
>  1 file changed, 158 insertions(+)
>
> diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
> index f6973ea55eda..76dc6e65930a 100644
> --- a/drivers/infiniband/hw/irdma/verbs.c
> +++ b/drivers/infiniband/hw/irdma/verbs.c
> @@ -2912,6 +2912,163 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
>  	return ERR_PTR(err);
>  }
>  
> +struct ib_mr *irdma_reg_user_mr_dmabuf(struct ib_pd *pd, u64 start,
> +				       u64 len, u64 virt,
> +				       int fd, int access,
> +				       struct ib_udata *udata)
> +{
> +	struct irdma_device *iwdev = to_iwdev(pd->device);
> +	struct irdma_ucontext *ucontext;
> +	struct irdma_pble_alloc *palloc;
> +	struct irdma_pbl *iwpbl;
> +	struct irdma_mr *iwmr;
> +	struct irdma_mem_reg_req req;
> +	u32 total, stag = 0;
> +	u8 shadow_pgcnt = 1;
> +	bool use_pbles = false;
> +	unsigned long flags;
> +	int err = -EINVAL;
> +	struct ib_umem_dmabuf *umem_dmabuf;
> +
> +	if (len > iwdev->rf->sc_dev.hw_attrs.max_mr_size)
> +		return ERR_PTR(-EINVAL);
> +
> +	if (udata->inlen < IRDMA_MEM_REG_MIN_REQ_LEN)
> +		return ERR_PTR(-EINVAL);
> +
> +	umem_dmabuf = ib_umem_dmabuf_get_pinned(pd->device, start, len, fd,
> +						access);
> +	if (IS_ERR(umem_dmabuf)) {
> +		err = PTR_ERR(umem_dmabuf);
> +		ibdev_dbg(&iwdev->ibdev, "Failed to get dmabuf umem[%d]\n", err);
> +		return ERR_PTR(err);
> +	}
> +
> +	if (ib_copy_from_udata(&req, udata, min(sizeof(req), udata->inlen))) {
> +		ib_umem_release(&umem_dmabuf->umem);
> +		return ERR_PTR(-EFAULT);
> +	}
> +
> +	iwmr = kzalloc(sizeof(*iwmr), GFP_KERNEL);
> +	if (!iwmr) {
> +		ib_umem_release(&umem_dmabuf->umem);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	iwpbl = &iwmr->iwpbl;
> +	iwpbl->iwmr = iwmr;
> +	iwmr->region = &umem_dmabuf->umem;
> +	iwmr->ibmr.pd = pd;
> +	iwmr->ibmr.device = pd->device;
> +	iwmr->ibmr.iova = virt;
> +	iwmr->page_size = PAGE_SIZE;
> +
> +	if (req.reg_type == IRDMA_MEMREG_TYPE_MEM) {
> +		iwmr->page_size = ib_umem_find_best_pgsz(iwmr->region,
> +							 iwdev->rf->sc_dev.hw_attrs.page_size_cap,
> +							 virt);

You can't call rdma_umem_for_each_dma_block() without also calling this
function (ib_umem_find_best_pgsz()) to validate that the page_size
passed to rdma_umem_for_each_dma_block() is correct. This seems to be
an existing bug, please fix it.

Also, is there a reason this code is all duplicated from
irdma_reg_user_mr()? Please split things up like the other drivers:
obtain the umem first, then use shared code to process the umem as
required.

Jason