On Mon, Jun 21, 2021 at 02:28:48PM +0200, Daniel Vetter wrote:
> On Fri, Jun 18, 2021 at 2:36 PM Oded Gabbay <ogabbay@xxxxxxxxxx> wrote:
> > A user process might want to share the device memory with another
> > driver/device and allow it to access that memory over PCIe (P2P).
> >
> > To enable this, we utilize the dma-buf mechanism and add dma-buf
> > exporter support, so the other driver can import the device memory
> > and access it.
> >
> > The device memory is allocated using our existing allocation uAPI,
> > where the user will get a handle that represents the allocation.
> >
> > The user will then need to call the new uAPI
> > (HL_MEM_OP_EXPORT_DMABUF_FD) and give the handle as a parameter.
> >
> > The driver will return a FD that represents the DMA-BUF object that
> > was created to match that allocation.
> >
> > Signed-off-by: Oded Gabbay <ogabbay@xxxxxxxxxx>
> > Reviewed-by: Tomer Tayar <ttayar@xxxxxxxxx>
>
> Mission accomplished, we've gone full circle, and the totally-not-a-gpu
> driver is now trying to use gpu infrastructure. And it seems to have
> gained vram meanwhile too. Next up is going to be synchronization
> using dma_fence so you can pass buffers back and forth without stalls
> among drivers.

What's wrong with other drivers using dmabufs and even dma_fence?  It's
a common problem when shuffling memory around systems, so why is that
somehow only allowed for gpu drivers?

There are many users of these structures in the kernel today that are
not gpu drivers (tee, fastrpc, virtio, xen, IB, etc.), as this is a
common thing that drivers want to do (throw chunks of memory around
from userspace to hardware).

I'm not trying to be a pain here, but I really do not understand why
this is a problem.  A kernel API is present, so why shouldn't other
in-kernel drivers use it?  We had the problem in the past where
subsystems were trying to create their own interfaces for the same
thing, which is why you all created the dmabuf API to help unify this.

> Also I'm wondering which is the other driver that we share buffers
> with. The gaudi stuff doesn't have real struct pages as backing
> storage, it only fills out the dma_addr_t. That tends to blow up with
> other drivers, and the only place where this is guaranteed to work is
> if you have a dynamic importer which sets the allow_peer2peer flag.
> Adding maintainers from other subsystems who might want to chime in
> here. So even aside from the big question, as-is this is broken.

From what I can tell, this driver is sending the buffers to other
instances of the same hardware, as that's what is on the other "end" of
the network connection.  No different from IB's use of RDMA, right?

thanks,

greg k-h
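[Editor's note: for readers less familiar with the dma-buf API being
argued about above, here is a rough, generic sketch of the exporter
side: a driver wraps an existing allocation in a dma_buf and hands a
file descriptor back to userspace, which is the mechanism an export
op like HL_MEM_OP_EXPORT_DMABUF_FD builds on.  This is not the actual
habanalabs code; my_export_to_fd() and my_dmabuf_ops are hypothetical
placeholders.]

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/fcntl.h>

/*
 * Hypothetical ops table; a real exporter fills in .map_dma_buf,
 * .unmap_dma_buf and .release at minimum.
 */
static const struct dma_buf_ops my_dmabuf_ops;

static int my_export_to_fd(void *obj, size_t size)
{
	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
	struct dma_buf *dmabuf;
	int fd;

	exp_info.ops = &my_dmabuf_ops;
	exp_info.size = size;
	exp_info.flags = O_CLOEXEC;
	exp_info.priv = obj;		/* driver-private allocation handle */

	dmabuf = dma_buf_export(&exp_info);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	/* this fd is what an export-to-fd uAPI would hand back to userspace */
	fd = dma_buf_fd(dmabuf, O_CLOEXEC);
	if (fd < 0)
		dma_buf_put(dmabuf);

	return fd;
}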
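[Editor's note: and a sketch of the "dynamic importer which sets the
allow_peer2peer flag" that Daniel refers to.  An importer that attaches
with dma_buf_dynamic_attach() and opts in via allow_peer2peer signals
that it can cope with an exporter that only provides DMA addresses and
has no struct pages behind them.  Again a generic illustration, not
code from any particular driver; my_move_notify() and my_import() are
made-up names.]

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/dma-resv.h>
#include <linux/err.h>

static void my_move_notify(struct dma_buf_attachment *attach)
{
	/* exporter is moving the buffer; tear down our DMA mappings here */
}

static const struct dma_buf_attach_ops my_attach_ops = {
	.allow_peer2peer = true,	/* can handle MMIO/VRAM, no struct pages */
	.move_notify = my_move_notify,
};

static int my_import(struct device *dev, struct dma_buf *dmabuf)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	attach = dma_buf_dynamic_attach(dmabuf, dev, &my_attach_ops, NULL);
	if (IS_ERR(attach))
		return PTR_ERR(attach);

	/* dynamic attachments must be mapped with the reservation lock held */
	dma_resv_lock(dmabuf->resv, NULL);
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	dma_resv_unlock(dmabuf->resv);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		return PTR_ERR(sgt);
	}

	/* use the DMA addresses in sgt; never dereference page pointers */
	return 0;
}

[An importer that does not provide these attach ops gets the ordinary
pinned, page-backed path, which is why Daniel says the no-struct-pages
case is only guaranteed to work with a peer2peer-aware dynamic
importer.]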