On Thu, Sep 17, 2020 at 04:54:44PM +0200, Christian König wrote:
> On 17.09.20 at 16:35, Jason Gunthorpe wrote:
> > On Thu, Sep 17, 2020 at 02:24:29PM +0200, Christian König wrote:
> > > On 17.09.20 at 14:18, Jason Gunthorpe wrote:
> > > > On Thu, Sep 17, 2020 at 02:03:48PM +0200, Christian König wrote:
> > > > > On 17.09.20 at 13:31, Jason Gunthorpe wrote:
> > > > > > On Thu, Sep 17, 2020 at 10:09:12AM +0200, Daniel Vetter wrote:
> > > > > >
> > > > > > > Yeah, but it doesn't work when forwarding from the drm chardev to the
> > > > > > > dma-buf on the importer side, since you'd need a ton of different
> > > > > > > address spaces. And you still rely on the core code picking up your
> > > > > > > pgoff mangling, which feels about as risky to me as the vma file
> > > > > > > pointer wrangling - if it's not consistently applied the reverse map
> > > > > > > is toast and unmap_mapping_range doesn't work correctly for our needs.
> > > > > >
> > > > > > I would think the pgoff has to be translated at the same time the
> > > > > > vma->vm_file is changed?
> > > > > >
> > > > > > The owner of the dma_buf should have one virtual address space and FD,
> > > > > > all its dma-bufs should be linked to it, and all pgoffs translated to
> > > > > > that space.
> > > > >
> > > > > Yeah, that is exactly how amdgpu is doing it.
> > > > >
> > > > > Going to document that somehow when I'm done with the TTM cleanups.
> > > >
> > > > BTW, while people are looking at this, is there a way to go from a VMA
> > > > to the dma_buf that owns it?
> > >
> > > Only a driver-specific one.
> >
> > Sounds OK
> >
> > > For TTM drivers vma->vm_private_data points to the buffer object. Not sure
> > > about the drivers using GEM only.
> >
> > Why are drivers in control of the vma? I would think dma_buf should be
> > the vma owner. IIRC module lifetime correctness essentially hinges on
> > the module owner of the struct file.
>
> Because the page fault handling is completely driver-specific.
>
> We could install some DMA-buf vmops, but that would just be another layer
> of redirection.

If it is already taking a page fault, I'm not sure the extra function
call indirection is going to be a big deal. Having a uniform VMA sounds
saner than every driver rolling its own custom thing.

When I unwound a similar mess in RDMA, all the custom VMA stuff in the
drivers turned out to be generally buggy, at least.

Is vma->vm_file->private_data universally a dma_buf pointer, at least?

> > So, user VA -> find_vma -> dma_buf object -> dma_buf operations on the
> > memory it represents
>
> Ah, yes, we are already doing this in amdgpu as well. But only for
> DMA-bufs, or more generally buffers which are mmapped by this driver
> instance.

So there is no general dma_buf service? That is a real bummer.

Jason
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel
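
For illustration, a minimal C sketch of the "user VA -> find_vma ->
dma_buf" chain discussed in the thread. It assumes the dma-buf core
(not the driver) installed vma->vm_file as the dma-buf file, so that
file->private_data is the struct dma_buf - which, per the thread, is
exactly what is not guaranteed today. dma_buf_from_vma() and
dma_buf_from_user_va() are hypothetical helpers, not existing kernel
API:

#include <linux/err.h>
#include <linux/mm.h>
#include <linux/dma-buf.h>

/* Hypothetical: only valid if the dma-buf core owns the mapping. */
static struct dma_buf *dma_buf_from_vma(struct vm_area_struct *vma)
{
	if (!vma->vm_file)
		return ERR_PTR(-EINVAL);
	/*
	 * For a file created by dma_buf_export(), file->private_data
	 * is the struct dma_buf. There is no exported predicate to
	 * verify the file really is a dma-buf, hence the hedging.
	 */
	return vma->vm_file->private_data;
}

static struct dma_buf *dma_buf_from_user_va(struct mm_struct *mm,
					    unsigned long addr)
{
	struct vm_area_struct *vma;
	struct dma_buf *dmabuf = ERR_PTR(-ENOENT);

	mmap_read_lock(mm);
	vma = find_vma(mm, addr);
	/* find_vma() only guarantees addr < vm_end; check the start too. */
	if (vma && addr >= vma->vm_start)
		dmabuf = dma_buf_from_vma(vma);
	if (!IS_ERR(dmabuf))
		get_dma_buf(dmabuf); /* hold a reference before unlocking */
	mmap_read_unlock(mm);

	return dmabuf;
}

The caller would own a dma_buf reference on success and drop it with
dma_buf_put() when done; in today's kernels this lookup only works per
driver instance (e.g. via TTM's vm_private_data), as Christian notes.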