On 17.06.21 at 21:58, Daniel Vetter wrote:
On Thu, Jun 17, 2021 at 09:37:36AM +0200, Christian König wrote:
[SNIP]
But, to the broader point, maybe? I'm a little fuzzy on exactly where
i915 inserts and/or depends on fences.
When you combine that with complex drivers which use TTM and buffer
moves underneath you can construct an information leak using this and
give userspace access to memory which is allocated to the driver, but
not yet initialized.
This way you can leak things like page tables, passwords, kernel data
etc. in large amounts to userspace, which is an absolute no-go for
security.
Ugh... Unfortunately, I'm really out of my depth on the implications
going on here but I think I see your point.
That's why I said we need to get this fixed before we upstream this
patch set here, and especially the driver change which is using it.
Well, i915 has had uAPI for a while to ignore fences.
Yeah, exactly that's illegal.
You're a few years too late with closing that barn door. The following
drivers have this concept:
- i915
- msm
- etnaviv
Because you can't write a competent vulkan driver without this.
WHAT? ^^
This was discussed at absolutely epic length at various XDCs, iirc. We did
somewhat ignore the vram/ttm/bo-moving problem because all the people
present were hacking on integrated GPUs (see the list above), but that
just means we need to treat the ttm_bo->moving fence properly.
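As a sketch, "treating it properly" means something like this on the
access path. The helper name is made up; bo->moving and the fence calls
are the kernel API as it stands today:

    static int wait_for_pending_move(struct ttm_buffer_object *bo)
    {
            struct dma_fence *moving = bo->moving;

            /* The buffer may still be in flight from a VRAM<->GTT move;
             * touching its memory before this signals is exactly the
             * information leak discussed above. */
            if (moving && !dma_fence_is_signaled(moving))
                    return dma_fence_wait(moving, true /* interruptible */);
            return 0;
    }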
I should have attended more XDCs in the past; the problem is much larger
than this.
But I now start to understand what you are doing with that design and
why it looks so messy to me: amdgpu is currently just the only driver
which does Vulkan and complex memory management at the same time.
At the very least the kernel-internal fences, like those for moving or
clearing a buffer object, need to be taken into account before a driver
is allowed to access a buffer.
Yes i915 needs to make sure it never ignores ttm_bo->moving.
No, that is only the tip of the iceberg. TTM, for example, also puts
fences which drivers need to wait for into the shared slots. The same
applies to use cases like clear-on-release etc.
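For illustration, the defensive option is to wait for everything in the
reservation object, shared slots included, before touching the memory.
The helper below carried an _rcu suffix before the rename series:

    /* Wait for all fences, exclusive and shared, with no timeout. */
    ret = dma_resv_wait_timeout(bo->base.resv,
                                true, /* wait_all: include shared slots */
                                true, /* interruptible */
                                MAX_SCHEDULE_TIMEOUT);
    if (ret < 0)
            return ret;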
From my point of view the main purpose of the dma_resv object is to
serve memory management, synchronization for command submission is just
a secondary use case.
And that drivers choose to ignore the exclusive fence is an absolute
no-go from a memory management and security point of view. Exclusive
access means exclusive access. Ignoring that won't work.
The only thing which saved us so far is the fact that drivers doing this
are not that complex.
BTW: How does it even work? I mean, then you would run into the same
problem as amdgpu with its page table update fences, i.e. that your
shared fences might signal before the exclusive one.
For dma-buf this isn't actually a problem, because dma-buf are pinned. You
can't move them while other drivers are using them, hence there's not
actually a ttm_bo->moving fence we can ignore.
p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
these other drivers) need to change before they can do dynamic dma-buf.
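For illustration, the two attach paths look roughly like this (variable
names made up, the functions are the real dma-buf API):

    /* Static importer: the exporter pins the buffer for the attachment's
     * lifetime, so no move can happen underneath us. */
    attach = dma_buf_attach(dmabuf, dev);

    /* Dynamic importer: the buffer stays movable; the importer must
     * supply move_notify in its ops and honor the resulting fences. */
    attach = dma_buf_dynamic_attach(dmabuf, dev, &importer_ops, priv);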
Otherwise we have an information leak worth a CVE and that is certainly not
something we want.
Because yes otherwise we get a CVE. But right now I don't think we have
one.
Yeah, agreed. But this is just because of coincidence and not because of
good engineering :)
We do have quite a bit of confusion about what exactly the signaling
order is supposed to be between the exclusive fence and the collective
set of shared fences, and there's some unifying that needs to happen
here. But I think what Jason implements here in the import ioctl is the
most defensive version possible, so it really can't break any driver. It
works as if you had an ad-hoc GPU engine that does nothing itself, but
waits for the current exclusive fence and then sets the exclusive fence
to its "CS" completion fence.
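Heavily simplified from Jason's actual patch (refcounting and error
handling omitted, helper names roughly as in current kernels), that
semantics looks like:

    /* Collapse the current exclusive fence together with the imported
     * one into a fence array and install that as the new exclusive
     * fence, so the import can never overtake pending work. */
    fences[0] = dma_fence_get(dma_resv_excl_fence(resv));
    fences[1] = dma_fence_get(imported);
    array = dma_fence_array_create(2, fences,
                                   dma_fence_context_alloc(1), 0,
                                   false /* signal when all signal */);
    dma_resv_add_excl_fence(resv, &array->base);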
That's imo a perfectly legit use-case.
The use case is certainly legit, but I'm not sure if merging this at the
moment is a good idea.
Your note that drivers are already ignoring the exclusive fence in the
dma_resv object was eye-opening to me. And I now have the very strong
feeling that the synchronization and the design of the dma_resv object
are even messier than I thought.
To summarize, we can count ourselves really lucky that it hasn't blown
up in our faces already.
Same for the export one. Waiting for a previous snapshot of implicit
fences is imo a perfectly OK use-case and useful for compositors - the
client might soon start more rendering, and on some drivers that always
results in the exclusive slot being set, so if you don't take a snapshot
you oversync really badly for your atomic flip.
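A userspace sketch of that compositor pattern, assuming the export ioctl
from this series (struct/ioctl names per the patches, may change): the
snapshot goes straight into the flip's IN_FENCE_FD instead of blocking
on whatever the client queues later:

    struct dma_buf_export_sync_file export = {
            .flags = DMA_BUF_SYNC_READ, /* we only read/scan out */
            .fd = -1,
    };

    /* Snapshot the implicit fences as they are *now*. */
    ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &export);

    /* Hand the snapshot to the atomic flip; anything the client
     * queues afterwards no longer delays us. */
    drmModeAtomicAddProperty(req, plane_id, in_fence_fd_prop, export.fd);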
The export use case is unproblematic as far as I can see.
Those changes are years in the past. If we have a real problem here (not sure on
that yet), then we'll have to figure out how to fix it without nuking
uAPI.
Well, that was the basic idea of attaching flags to the fences in the
dma_resv object.
In other words, you clearly denote when you have to wait for a fence
before accessing a buffer, because otherwise you cause a security issue.
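Purely hypothetical sketch of that flag idea - nothing like this exists
in the kernel yet:

    /* Hypothetical: annotate each fence in the dma_resv with how it
     * must be honored. */
    enum dma_resv_fence_usage {
            DMA_RESV_FENCE_KERNEL,   /* moves, clears: waiting is mandatory */
            DMA_RESV_FENCE_IMPLICIT, /* implicit sync: drivers may opt out */
    };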
Replied somewhere else, and I do kinda like the flag idea. But the problem
is we first need a ton more encapsulation and review of drivers before we
can change the internals. One thing at a time.
Ok how should we then proceed?
The large patch set I've sent out to convert all users of the shared
fence list to a for_each API is a step in the right direction, I think,
but there is still a bit more to do.
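The shape of that API, roughly as in the conversion series (iterator
names may differ in the final version):

    struct dma_resv_iter cursor;
    struct dma_fence *fence;

    dma_resv_for_each_fence(&cursor, resv, true /* all fences */, fence) {
            /* Each driver decides here, per fence, whether it has to
             * wait, instead of open-coding the shared-slot walk. */
    }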
And yes for amdgpu this gets triple-hard because you both have the
ttm_bo->moving fence _and_ the current uapi of using fence ownership _and_
you need to figure out how to support vulkan properly with true opt-in
fencing.
Well I have been pondering on that for a bit and I came to the
conclusion that it is actually not a problem at all.
See, radeon, nouveau, msm etc. all implement logic so that they don't
wait for fences from the same timeline, context or engine. That amdgpu
doesn't wait for fences from the same process can be seen as just a
special case of this.
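Sketched, the common rule looks like this (job->fence_context is a
made-up field name; amdgpu widens the comparison from fence context to
owning process):

    dma_resv_for_each_fence(&cursor, resv, true, fence) {
            /* Don't wait on our own timeline... */
            if (fence->context == job->fence_context)
                    continue;
            /* ...amdgpu instead skips fences owned by the same
             * process. Everything else must be waited for. */
            ret = dma_fence_wait(fence, true);
            if (ret)
                    return ret;
    }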
I'm pretty sure it's doable, I'm just not finding any time
anywhere to hack on these patches - too many other fires :-(
Well I'm here. Let's just agree on the direction and I can do the coding.
What I need help with is all the auditing. For example I still haven't
wrapped my head around how i915 does the synchronization.
Regards,
Christian.
On 10.06.21 at 23:09, Jason Ekstrand wrote:
Modern userspace APIs like Vulkan are built on an explicit
synchronization model. This doesn't always play nicely with the
implicit synchronization used in the kernel and assumed by X11 and
Wayland. The client -> compositor half of the synchronization isn't too
bad, at least on Intel, because we can control whether or not i915
synchronizes on the buffer and whether or not it's considered written.
The harder part is the compositor -> client synchronization when we get
the buffer back from the compositor. We're required to be able to
provide the client with a VkSemaphore and VkFence representing the point
in time where the window system (compositor and/or display) finished
using the buffer. With current APIs, it's very hard to do this in such
a way that we don't get confused by the Vulkan driver's access of the
buffer. In particular, once we tell the kernel that we're rendering to
the buffer again, any CPU waits on the buffer or GPU dependencies will
wait on some of the client rendering and not just the compositor.
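For reference, handing such a point in time to the client maps onto
VK_KHR_external_semaphore_fd with the sync-fd handle type. A sketch,
error handling omitted:

    const VkImportSemaphoreFdInfoKHR import_info = {
            .sType = VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_FD_INFO_KHR,
            .semaphore = semaphore,
            /* Sync FDs must be imported with temporary permanence. */
            .flags = VK_SEMAPHORE_IMPORT_TEMPORARY_BIT,
            .handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
            .fd = sync_file_fd, /* from the new export ioctl below */
    };
    vkImportSemaphoreFdKHR(device, &import_info);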
This new IOCTL solves this problem by allowing us to get a snapshot of
the implicit synchronization state of a given dma-buf in the form of a
sync file. It's effectively the same as a poll() or I915_GEM_WAIT only,
instead of CPU waiting directly, it encapsulates the wait operation, at
the current moment in time, in a sync_file so we can check/wait on it
later. As long as the Vulkan driver does the sync_file export from the
dma-buf before we re-introduce it for rendering, it will only contain
fences from the compositor or display. This allows us to accurately turn
it into a VkFence or VkSemaphore without any over-synchronization.
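In code, the Vulkan driver's snapshot looks roughly like this, using the
uapi from this series:

    struct dma_buf_export_sync_file args = {
            /* We are about to write, so snapshot readers and writers. */
            .flags = DMA_BUF_SYNC_WRITE,
            .fd = -1,
    };

    /* Must happen before the buffer is re-submitted for rendering, so
     * args.fd only contains compositor/display fences. */
    if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &args) == 0)
            sync_file_fd = args.fd;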
This patch series actually contains two new ioctls. There is the export
one mentioned above as well as an RFC for an import ioctl which provides
the other half. The intention is to land the export ioctl since it seems
like there's no real disagreement on that one. The import ioctl, however,
has a lot of debate around it so it's intended to be RFC-only for now.
Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
IGT tests: https://patchwork.freedesktop.org/series/90490/
v10 (Jason Ekstrand, Daniel Vetter):
- Add reviews/acks
- Add a patch to rename _rcu to _unlocked
- Split things better so import is clearly RFC status
v11 (Daniel Vetter):
- Add more CCs to try and get maintainers
- Add a patch to document DMA_BUF_IOCTL_SYNC
- Generally better docs
- Use separate structs for import/export (easier to document)
- Fix an issue in the import patch
v12 (Daniel Vetter):
- Better docs for DMA_BUF_IOCTL_SYNC
v12 (Christian König):
- Drop the rename patch in favor of Christian's series
- Add a comment to the commit message for the dma-buf sync_file export
ioctl saying why we made it an ioctl on dma-buf
Cc: Christian König <christian.koenig@xxxxxxx>
Cc: Michel Dänzer <michel@xxxxxxxxxxx>
Cc: Dave Airlie <airlied@xxxxxxxxxx>
Cc: Bas Nieuwenhuizen <bas@xxxxxxxxxxxxxxxxxxx>
Cc: Daniel Stone <daniels@xxxxxxxxxxxxx>
Cc: mesa-dev@xxxxxxxxxxxxxxxxxxxxx
Cc: wayland-devel@xxxxxxxxxxxxxxxxxxxxx
Test-with: 20210524205225.872316-1-jason@xxxxxxxxxxxxxx
Christian König (1):
dma-buf: Add dma_fence_array_for_each (v2)
Jason Ekstrand (5):
dma-buf: Add dma_resv_get_singleton (v6)
dma-buf: Document DMA_BUF_IOCTL_SYNC (v2)
dma-buf: Add an API for exporting sync files (v12)
RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked
RFC: dma-buf: Add an API for importing sync files (v7)
Documentation/driver-api/dma-buf.rst | 8 ++
drivers/dma-buf/dma-buf.c | 103 +++++++++++++++++++++++++
drivers/dma-buf/dma-fence-array.c | 27 +++++++
drivers/dma-buf/dma-resv.c | 110 +++++++++++++++++++++++++++
include/linux/dma-fence-array.h | 17 +++++
include/linux/dma-resv.h | 2 +
include/uapi/linux/dma-buf.h | 103 ++++++++++++++++++++++++-
7 files changed, 369 insertions(+), 1 deletion(-)
_______________________________________________
mesa-dev mailing list
mesa-dev@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/mesa-dev