On Mon, Jun 21, 2021 at 07:26:14PM +0300, Oded Gabbay wrote:
> On Mon, Jun 21, 2021 at 5:12 PM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
> >
> > On Mon, Jun 21, 2021 at 03:02:10PM +0200, Greg KH wrote:
> > > On Mon, Jun 21, 2021 at 02:28:48PM +0200, Daniel Vetter wrote:
> >
> > > > Also I'm wondering which is the other driver that we share buffers
> > > > with. The gaudi stuff doesn't have real struct pages as backing
> > > > storage, it only fills out the dma_addr_t. That tends to blow up with
> > > > other drivers, and the only place where this is guaranteed to work is
> > > > if you have a dynamic importer which sets the allow_peer2peer flag.
> > > > Adding maintainers from other subsystems who might want to chime in
> > > > here. So even aside from the big question, as-is this is broken.
> > >
> > > From what I can tell this driver is sending the buffers to other
> > > instances of the same hardware,
> >
> > A dmabuf is consumed by something else in the kernel calling
> > dma_buf_map_attachment() on the FD.
> >
> > What is the other side of this? I don't see any
> > dma_buf_map_attachment() calls in drivers/misc, or added in this patch
> > set.
>
> This patch-set only enables support for the exporter side.
> The "other side" is any generic RDMA networking device that will want
> to perform p2p communication over PCIe with our GAUDI accelerator.
> An example is indeed the mlx5 card, which has already integrated
> support for being an "importer".

That raises the question of how you are testing this if you aren't
using it with the only in-tree driver: mlx5.

Jason
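
For reference, the importer side discussed above boils down to roughly this
flow: attach to the exported dma-buf with dynamic importer ops that set
allow_peer2peer, take the reservation lock, and map the attachment to get the
DMA addresses. The sketch below is only a minimal illustration of that flow
against the dma-buf API of this era; the "example_" names, the device pointer,
and the error handling are hypothetical, move_notify is stubbed out, and a
real importer (such as mlx5) must also invalidate its mappings on move_notify
and deal with fencing.

        #include <linux/dma-buf.h>
        #include <linux/dma-resv.h>
        #include <linux/dma-direction.h>
        #include <linux/err.h>

        /*
         * Called by the exporter when it wants to move the buffer. A real
         * importer must tear down its mappings here; stubbed for brevity.
         */
        static void example_move_notify(struct dma_buf_attachment *attach)
        {
        }

        static const struct dma_buf_attach_ops example_importer_ops = {
                .allow_peer2peer = true,        /* accept peer-to-peer (MMIO) addresses */
                .move_notify = example_move_notify,
        };

        /*
         * Import a dma-buf FD exported by the accelerator and map it for DMA.
         * 'dev' is the importing device, e.g. the RDMA NIC's struct device.
         */
        static int example_import(struct device *dev, int fd)
        {
                struct dma_buf *dmabuf;
                struct dma_buf_attachment *attach;
                struct sg_table *sgt;
                int ret = 0;

                dmabuf = dma_buf_get(fd);
                if (IS_ERR(dmabuf))
                        return PTR_ERR(dmabuf);

                attach = dma_buf_dynamic_attach(dmabuf, dev,
                                                &example_importer_ops, NULL);
                if (IS_ERR(attach)) {
                        ret = PTR_ERR(attach);
                        goto out_put;
                }

                /* Dynamic importers map under the reservation lock. */
                dma_resv_lock(dmabuf->resv, NULL);
                sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
                if (IS_ERR(sgt)) {
                        ret = PTR_ERR(sgt);
                        goto out_unlock;
                }

                /* ... program the sgt's DMA addresses into the importing device ... */

                dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
        out_unlock:
                dma_resv_unlock(dmabuf->resv);
                dma_buf_detach(dmabuf, attach);
        out_put:
                dma_buf_put(dmabuf);
                return ret;
        }

Because the Gaudi exporter hands out dma_addr_t values with no backing struct
pages, an importer that does not advertise allow_peer2peer (or that expects
struct pages behind the scatterlist) is exactly the case Daniel warns will
blow up.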