On Mon, Jun 21, 2021 at 9:27 PM Daniel Vetter <daniel.vetter@xxxxxxxx> wrote:
>
> On Mon, Jun 21, 2021 at 7:55 PM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
> >
> > On Mon, Jun 21, 2021 at 07:26:14PM +0300, Oded Gabbay wrote:
> > >
> > > On Mon, Jun 21, 2021 at 5:12 PM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
> > > >
> > > > On Mon, Jun 21, 2021 at 03:02:10PM +0200, Greg KH wrote:
> > > > >
> > > > > On Mon, Jun 21, 2021 at 02:28:48PM +0200, Daniel Vetter wrote:
> > > > > >
> > > > > > Also I'm wondering which is the other driver that we share buffers
> > > > > > with. The gaudi stuff doesn't have real struct pages as backing
> > > > > > storage, it only fills out the dma_addr_t. That tends to blow up with
> > > > > > other drivers, and the only place where this is guaranteed to work is
> > > > > > if you have a dynamic importer which sets the allow_peer2peer flag.
> > > > > > Adding maintainers from other subsystems who might want to chime in
> > > > > > here. So even aside from the big question, as-is this is broken.
> > > > >
> > > > > From what I can tell this driver is sending the buffers to other
> > > > > instances of the same hardware,
> > > >
> > > > A dmabuf is consumed by something else in the kernel calling
> > > > dma_buf_map_attachment() on the FD.
> > > >
> > > > What is the other side of this? I don't see any
> > > > dma_buf_map_attachment() calls in drivers/misc, or added in this
> > > > patch set.
> > >
> > > This patch-set only enables support for the exporter side.
> > > The "other side" is any generic RDMA networking device that wants
> > > to perform p2p communication over PCIe with our GAUDI accelerator.
> > > An example is indeed the mlx5 card, which has already integrated
> > > support for being an "importer".
> >
> > It raises the question of how you are testing this if you aren't
> > using it with the only in-tree driver: mlx5.
>
> For p2p dma-buf there's also amdgpu as a possible in-tree candidate
> driver, that's why I added the amdgpu folks. Otoh I'm not aware of
> AI+GPU combos being much in use, at least with upstream gpu drivers
> (the nvidia blob is a different story ofc, but I don't care what they
> do in their own world).
> -Daniel

We have done / are doing three things:

1. I wrote a simple "importer" driver that emulates an RDMA driver. It
calls all the ib_umem_dmabuf functions, the same as the mlx5 driver
does, but instead of using h/w it accesses the BAR directly. We wrote
several tests that emulate the real application: i.e., ask the
habanalabs driver to create a dma-buf object and export its FD back to
userspace; userspace then sends the FD to the "importer" driver, which
attaches to it, gets the SG list, and accesses the memory on the GAUDI
device (see the sketches at the end of this mail). This gave me
confidence that the way we integrated the exporter is basically
correct/working.

2. We are trying to do a POC with a Mellanox card we have; WIP.

3. We are working with another third-party RDMA device whose driver is
now adding support for being an "importer"; also WIP.

For points 2 and 3 we haven't yet reached the stage of actually
testing this feature.

Another thing I want to emphasize is that we do p2p only through
export/import of the FD. We do *not* allow the user to mmap the
dma-buf, as we do not support direct I/O, so there is no access to
these pages from userspace.

Thanks,
Oded
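
A few sketches to make the above concrete (these are illustrations,
not code from the patch set). First, a minimal sketch of the
dynamic-importer path Jason and Daniel refer to: an importer that
wants an SG list from an exporter with no struct pages behind the
buffer has to attach with allow_peer2peer set before calling
dma_buf_map_attachment(). This assumes the 5.12-era dma-buf API; all
example_* names are hypothetical and error handling is trimmed.

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

static void example_move_notify(struct dma_buf_attachment *attach)
{
	/* The exporter is about to move the buffer; a real importer
	 * would invalidate its mapping here. Pinned device memory
	 * like GAUDI's is not expected to move. */
}

static const struct dma_buf_attach_ops example_attach_ops = {
	/* Required when the exporter has no struct pages and only
	 * fills out dma_addr_t entries (PCIe p2p). */
	.allow_peer2peer = true,
	.move_notify = example_move_notify,
};

static struct sg_table *example_import(struct device *dev, int fd)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dmabuf = dma_buf_get(fd);	/* takes a ref on the FD's dma-buf */
	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	attach = dma_buf_dynamic_attach(dmabuf, dev, &example_attach_ops, NULL);
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return ERR_CAST(attach);
	}

	/* Dynamic importers map and unmap under the reservation lock. */
	dma_resv_lock(dmabuf->resv, NULL);
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	dma_resv_unlock(dmabuf->resv);

	/* sg_dma_address()/sg_dma_len() of sgt now describe the peer's
	 * BAR addresses as seen by @dev. */
	return sgt;
}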
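
Next, a rough sketch of what the emulated RDMA "importer" test driver
from point 1 could look like, assuming the ib_umem_dmabuf entry points
that mlx5 uses (available since kernel 5.12); the fake_rdma_* names
are hypothetical, and cleanup is elided.

#include <rdma/ib_umem.h>
#include <rdma/ib_verbs.h>
#include <linux/dma-resv.h>
#include <linux/err.h>

static void fake_rdma_move_notify(struct dma_buf_attachment *attach)
{
	/* No-op: the exporter's memory is pinned device memory. */
}

static const struct dma_buf_attach_ops fake_rdma_attach_ops = {
	.allow_peer2peer = true,
	.move_notify = fake_rdma_move_notify,
};

static int fake_rdma_import(struct ib_device *ibdev, int fd, size_t size)
{
	struct ib_umem_dmabuf *umem_dmabuf;
	int ret;

	/* Same entry point the mlx5 dma-buf MR path uses. */
	umem_dmabuf = ib_umem_dmabuf_get(ibdev, 0, size, fd,
					 IB_ACCESS_LOCAL_WRITE |
					 IB_ACCESS_REMOTE_READ |
					 IB_ACCESS_REMOTE_WRITE,
					 &fake_rdma_attach_ops);
	if (IS_ERR(umem_dmabuf))
		return PTR_ERR(umem_dmabuf);

	/* Map under the reservation lock to get the SG list that the
	 * test then walks to access GAUDI memory through the BAR. */
	dma_resv_lock(umem_dmabuf->attach->dmabuf->resv, NULL);
	ret = ib_umem_dmabuf_map_pages(umem_dmabuf);
	dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);

	/* ... walk the SG list, then unmap and release ... */
	return ret;
}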
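
Finally, on the no-mmap point: one way an exporter can refuse CPU
mappings, so that the buffer is reachable only via attach/map on the
FD, is a stub .mmap in its dma_buf_ops. This is just an illustration
(names hypothetical), not necessarily how the habanalabs patches do
it; on kernels where the dma-buf core checks for a NULL .mmap, simply
leaving it unimplemented has the same effect.

#include <linux/dma-buf.h>
#include <linux/errno.h>
#include <linux/mm_types.h>

static int example_dmabuf_mmap(struct dma_buf *dmabuf,
			       struct vm_area_struct *vma)
{
	/* p2p access goes through FD export/import only; no CPU
	 * mappings of the device memory are handed to userspace. */
	return -EOPNOTSUPP;
}

static const struct dma_buf_ops example_dmabuf_ops = {
	/* .attach / .map_dma_buf / .unmap_dma_buf / .release elided */
	.mmap = example_dmabuf_mmap,
};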