> -----Original Message-----
> From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> Sent: Friday, October 16, 2020 6:05 PM
> To: Xiong, Jianxin <jianxin.xiong@xxxxxxxxx>
> Cc: linux-rdma@xxxxxxxxxxxxxxx; dri-devel@xxxxxxxxxxxxxxxxxxxxx; Doug Ledford <dledford@xxxxxxxxxx>; Leon Romanovsky
> <leon@xxxxxxxxxx>; Sumit Semwal <sumit.semwal@xxxxxxxxxx>; Christian Koenig <christian.koenig@xxxxxxx>; Vetter, Daniel
> <daniel.vetter@xxxxxxxxx>
> Subject: Re: [PATCH v5 1/5] RDMA/umem: Support importing dma-buf as user memory region
>
> On Sat, Oct 17, 2020 at 12:57:21AM +0000, Xiong, Jianxin wrote:
> > > From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> > > Sent: Friday, October 16, 2020 5:28 PM
> > > To: Xiong, Jianxin <jianxin.xiong@xxxxxxxxx>
> > > Cc: linux-rdma@xxxxxxxxxxxxxxx; dri-devel@xxxxxxxxxxxxxxxxxxxxx;
> > > Doug Ledford <dledford@xxxxxxxxxx>; Leon Romanovsky
> > > <leon@xxxxxxxxxx>; Sumit Semwal <sumit.semwal@xxxxxxxxxx>; Christian
> > > Koenig <christian.koenig@xxxxxxx>; Vetter, Daniel
> > > <daniel.vetter@xxxxxxxxx>
> > > Subject: Re: [PATCH v5 1/5] RDMA/umem: Support importing dma-buf as
> > > user memory region
> > >
> > > On Thu, Oct 15, 2020 at 03:02:45PM -0700, Jianxin Xiong wrote:
> > > > +struct ib_umem *ib_umem_dmabuf_get(struct ib_device *device,
> > > > +				   unsigned long addr, size_t size,
> > > > +				   int dmabuf_fd, int access,
> > > > +				   const struct ib_umem_dmabuf_ops *ops) {
> > > > +	struct dma_buf *dmabuf;
> > > > +	struct ib_umem_dmabuf *umem_dmabuf;
> > > > +	struct ib_umem *umem;
> > > > +	unsigned long end;
> > > > +	long ret;
> > > > +
> > > > +	if (check_add_overflow(addr, (unsigned long)size, &end))
> > > > +		return ERR_PTR(-EINVAL);
> > > > +
> > > > +	if (unlikely(PAGE_ALIGN(end) < PAGE_SIZE))
> > > > +		return ERR_PTR(-EINVAL);
> > > > +
> > > > +	if (unlikely(!ops || !ops->invalidate || !ops->update))
> > > > +		return ERR_PTR(-EINVAL);
> > > > +
> > > > +	umem_dmabuf = kzalloc(sizeof(*umem_dmabuf), GFP_KERNEL);
> > > > +	if (!umem_dmabuf)
> > > > +		return ERR_PTR(-ENOMEM);
> > > > +
> > > > +	umem_dmabuf->ops = ops;
> > > > +	INIT_WORK(&umem_dmabuf->work, ib_umem_dmabuf_work);
> > > > +
> > > > +	umem = &umem_dmabuf->umem;
> > > > +	umem->ibdev = device;
> > > > +	umem->length = size;
> > > > +	umem->address = addr;
> > >
> > > addr here is offset within the dma buf, but this code does nothing with it.
> > >
> > The current code assumes 0 offset, and 'addr' is the nominal starting
> > address of the buffer. If this is to be changed to offset, then yes,
> > some more handling is needed as you mentioned below.
>
> There is no such thing as 'nominal starting address'
>
> If the user is to provide any argument it can only be offset and length.
>
> > > Also, dma_buf_map_attachment() does not do the correct dma mapping
> > > for RDMA, eg it does not use ib_dma_map(). This is not a problem for
> > > mlx5 but it is troublesome to put in the core code.
> >
> > ib_dma_map() uses dma_map_single(), GPU drivers use dma_map_resource()
> > for dma_buf_map_attachment(). They belong to the same family, but take
> > different address type (kernel address vs MMIO physical address).
> > Could you elaborate what the problem could be for non-mlx5 HCAs?
>
> They use the virtual dma ops which we intend to remove

We can add a check on the dma device before attaching the dma-buf, so
that the ib_umem_dmabuf_get() call from such drivers would fail.
Something like:

#ifdef CONFIG_DMA_VIRT_OPS
	if (device->dma_device->dma_ops == &dma_virt_ops)
		return ERR_PTR(-EINVAL);
#endif

>
> Jason
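For reference, a rough sketch of how that guard could sit at the top of
ib_umem_dmabuf_get(), ahead of the argument checks already in the patch
above; the exact placement is an assumption, not part of the posted patch:

struct ib_umem *ib_umem_dmabuf_get(struct ib_device *device,
				   unsigned long addr, size_t size,
				   int dmabuf_fd, int access,
				   const struct ib_umem_dmabuf_ops *ops)
{
	unsigned long end;

	/*
	 * Sketch: fail early for devices still using the virtual dma
	 * ops, before any dma-buf is attached, since the mapping
	 * produced by dma_buf_map_attachment() is unusable for them.
	 */
#ifdef CONFIG_DMA_VIRT_OPS
	if (device->dma_device->dma_ops == &dma_virt_ops)
		return ERR_PTR(-EINVAL);
#endif

	if (check_add_overflow(addr, (unsigned long)size, &end))
		return ERR_PTR(-EINVAL);

	/* ... remainder as in the patch above ... */
}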
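And on the offset/length point raised above: if the API is changed to take
an offset into the dma-buf rather than an address, the requested window
could be validated against the dma-buf's real size once the fd is
resolved. A minimal sketch, with hypothetical 'offset' and 'length'
parameters in place of 'addr' and 'size':

	struct dma_buf *dmabuf;

	dmabuf = dma_buf_get(dmabuf_fd);
	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	/* The requested window must lie entirely within the dma-buf. */
	if (length == 0 || offset > dmabuf->size ||
	    length > dmabuf->size - offset) {
		dma_buf_put(dmabuf);
		return ERR_PTR(-EINVAL);
	}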