On Thu, Jun 17, 2021 at 09:37:36AM +0200, Christian König wrote:
> Am 16.06.21 um 20:30 schrieb Jason Ekstrand:
> > On Tue, Jun 15, 2021 at 3:41 AM Christian König
> > <ckoenig.leichtzumerken@xxxxxxxxx> wrote:
> > > Hi Jason & Daniel,
> > >
> > > maybe I should explain once more where the problem with this approach
> > > is and why I think we need to get that fixed before we can do
> > > something like this here.
> > >
> > > To summarize, what this patch here does is copy the exclusive fence
> > > and/or the shared fences into a sync_file. This alone is totally
> > > unproblematic.
> > >
> > > The problem is what this implies. When you need to copy the exclusive
> > > fence to a sync_file, then this means that the driver is at some
> > > point ignoring the exclusive fence on a buffer object.
> >
> > Not necessarily. Part of the point of this is to allow for CPU waits
> > on a past point in a buffer's timeline. Today, we have poll() and
> > GEM_WAIT, both of which wait for the buffer to be idle from whatever
> > GPU work is currently happening. We want to wait on something in the
> > past and ignore anything happening now.
>
> Good point, yes that is indeed a valid use case.
>
> > But, to the broader point, maybe? I'm a little fuzzy on exactly where
> > i915 inserts and/or depends on fences.
> >
> > > When you combine that with complex drivers which use TTM and buffer
> > > moves underneath, you can construct an information leak using this
> > > and give userspace access to memory which is allocated to the driver,
> > > but not yet initialized.
> > >
> > > This way you can leak things like page tables, passwords, kernel data
> > > etc. in large amounts to userspace, and that is an absolute no-go
> > > for security.
> >
> > Ugh... Unfortunately, I'm really out of my depth on the implications
> > going on here, but I think I see your point.
> > > That's why I said we need to get this fixed before we upstream this
> > > patch set here, and especially the driver change which is using that.
> >
> > Well, i915 has had uAPI for a while to ignore fences.
>
> Yeah, exactly that's illegal.

You're a few years too late with closing that barn door. The following
drivers have this concept:
- i915
- msm
- etnaviv

Because you can't write a competent Vulkan driver without this. This was
discussed at absolutely epic length in various XDCs, iirc. We did somewhat
ignore the vram/ttm/bo-moving problem, because all the people present were
hacking on integrated GPUs (see list above), but that just means we need
to treat the ttm_bo->moving fence properly.

> At least the kernel internal fences like moving or clearing a buffer
> object need to be taken into account before a driver is allowed to
> access a buffer.

Yes, i915 needs to make sure it never ignores ttm_bo->moving.

For dma-buf this isn't actually a problem, because dma-bufs are pinned.
You can't move them while other drivers are using them, hence there's not
actually a ttm_bo->moving fence we can ignore.

p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
these other drivers) need to change before they can do dynamic dma-buf.

> Otherwise we have an information leak worth a CVE, and that is certainly
> not something we want.

Because yes, otherwise we get a CVE. But right now I don't think we have
one. We do have quite a big confusion about what exactly the signaling
ordering is supposed to be between the exclusive fence and the collective
set of shared fences, and there's some unifying that needs to happen here.
But I think what Jason implements here in the import ioctl is the most
defensive version possible, so it really can't break any driver. It
really works as if you had an ad-hoc GPU engine that does nothing itself,
but waits for the current exclusive fence and then sets the exclusive
fence with its "CS" completion fence. That's imo a perfectly legit
use-case.
Same for the export one. Waiting for a previous snapshot of the implicit
fences is imo a perfectly ok use-case and useful for compositors - the
client might soon start more rendering, and on some drivers that always
results in the exclusive slot being set, so if you don't take a snapshot
you oversync really badly for your atomic flip.

> > Those changes are years in the past. If we have a real problem here
> > (not sure on that yet), then we'll have to figure out how to fix it
> > without nuking uAPI.
>
> Well, that was the basic idea of attaching flags to the fences in the
> dma_resv object.
>
> In other words you clearly denote when you have to wait for a fence
> before accessing a buffer or you cause a security issue.

Replied somewhere else, and I do kinda like the flag idea. But the
problem is we first need a ton more encapsulation and review of drivers
before we can change the internals. One thing at a time.

And yes, for amdgpu this gets triple-hard because you both have the
ttm_bo->moving fence _and_ the current uAPI of using fence ownership
_and_ you need to figure out how to support Vulkan properly with true
opt-in fencing. I'm pretty sure it's doable, I'm just not finding any
time anywhere to hack on these patches - too many other fires :-(

Cheers, Daniel

> Christian.
>
> > --Jason
> >
> > > Regards,
> > > Christian.
> > >
> > > Am 10.06.21 um 23:09 schrieb Jason Ekstrand:
> > > > Modern userspace APIs like Vulkan are built on an explicit
> > > > synchronization model. This doesn't always play nicely with the
> > > > implicit synchronization used in the kernel and assumed by X11 and
> > > > Wayland. The client -> compositor half of the synchronization
> > > > isn't too bad, at least on Intel, because we can control whether
> > > > or not i915 synchronizes on the buffer and whether or not it's
> > > > considered written.
> > > >
> > > > The harder part is the compositor -> client synchronization when
> > > > we get the buffer back from the compositor.
> > > > We're required to be able to provide the client with a
> > > > VkSemaphore and VkFence representing the point in time where the
> > > > window system (compositor and/or display) finished using the
> > > > buffer. With current APIs, it's very hard to do this in such a way
> > > > that we don't get confused by the Vulkan driver's access of the
> > > > buffer. In particular, once we tell the kernel that we're
> > > > rendering to the buffer again, any CPU waits on the buffer or GPU
> > > > dependencies will wait on some of the client rendering and not
> > > > just the compositor.
> > > >
> > > > This new IOCTL solves this problem by allowing us to get a
> > > > snapshot of the implicit synchronization state of a given dma-buf
> > > > in the form of a sync file. It's effectively the same as a poll()
> > > > or I915_GEM_WAIT, only instead of CPU waiting directly, it
> > > > encapsulates the wait operation, at the current moment in time, in
> > > > a sync_file so we can check/wait on it later. As long as the
> > > > Vulkan driver does the sync_file export from the dma-buf before we
> > > > re-introduce it for rendering, it will only contain fences from
> > > > the compositor or display. This allows us to accurately turn it
> > > > into a VkFence or VkSemaphore without any over-synchronization.
> > > >
> > > > This patch series actually contains two new ioctls. There is the
> > > > export one mentioned above as well as an RFC for an import ioctl
> > > > which provides the other half. The intention is to land the export
> > > > ioctl since it seems like there's no real disagreement on that
> > > > one. The import ioctl, however, has a lot of debate around it, so
> > > > it's intended to be RFC-only for now.
> > > >
> > > > Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
> > > > IGT tests: https://patchwork.freedesktop.org/series/90490/
> > > >
> > > > v10 (Jason Ekstrand, Daniel Vetter):
> > > >  - Add reviews/acks
> > > >  - Add a patch to rename _rcu to _unlocked
> > > >  - Split things better so import is clearly RFC status
> > > >
> > > > v11 (Daniel Vetter):
> > > >  - Add more CCs to try and get maintainers
> > > >  - Add a patch to document DMA_BUF_IOCTL_SYNC
> > > >  - Generally better docs
> > > >  - Use separate structs for import/export (easier to document)
> > > >  - Fix an issue in the import patch
> > > >
> > > > v12 (Daniel Vetter):
> > > >  - Better docs for DMA_BUF_IOCTL_SYNC
> > > >
> > > > v12 (Christian König):
> > > >  - Drop the rename patch in favor of Christian's series
> > > >  - Add a comment to the commit message for the dma-buf sync_file
> > > >    export ioctl saying why we made it an ioctl on dma-buf
> > > >
> > > > Cc: Christian König <christian.koenig@xxxxxxx>
> > > > Cc: Michel Dänzer <michel@xxxxxxxxxxx>
> > > > Cc: Dave Airlie <airlied@xxxxxxxxxx>
> > > > Cc: Bas Nieuwenhuizen <bas@xxxxxxxxxxxxxxxxxxx>
> > > > Cc: Daniel Stone <daniels@xxxxxxxxxxxxx>
> > > > Cc: mesa-dev@xxxxxxxxxxxxxxxxxxxxx
> > > > Cc: wayland-devel@xxxxxxxxxxxxxxxxxxxxx
> > > > Test-with: 20210524205225.872316-1-jason@xxxxxxxxxxxxxx
> > > >
> > > > Christian König (1):
> > > >   dma-buf: Add dma_fence_array_for_each (v2)
> > > >
> > > > Jason Ekstrand (5):
> > > >   dma-buf: Add dma_resv_get_singleton (v6)
> > > >   dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
> > > >   dma-buf: Add an API for exporting sync files (v12)
> > > >   RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
> > > >   RFC: dma-buf: Add an API for importing sync files (v7)
> > > >
> > > >  Documentation/driver-api/dma-buf.rst |   8 ++
> > > >  drivers/dma-buf/dma-buf.c            | 103 +++++++++++++++++++++++++
> > > >  drivers/dma-buf/dma-fence-array.c    |  27 +++++++
> > > >  drivers/dma-buf/dma-resv.c           | 110 +++++++++++++++++++++++++++
> > > >  include/linux/dma-fence-array.h      |  17 +++++
> > > >  include/linux/dma-resv.h             |   2 +
> > > >  include/uapi/linux/dma-buf.h         | 103 ++++++++++++++++++++++++-
> > > >  7 files changed, 369 insertions(+), 1 deletion(-)
> > >
>
> _______________________________________________
> mesa-dev mailing list
> mesa-dev@xxxxxxxxxxxxxxxxxxxxx
> https://lists.freedesktop.org/mailman/listinfo/mesa-dev

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch