Re: [Mesa-dev] [RFC] Linux Graphics Next: Explicit fences everywhere and no BO fences - initial proposal

Hi,

On Tue, 20 Apr 2021 at 19:54, Daniel Vetter <daniel@xxxxxxxx> wrote:
> So I can mostly get behind this, except it's _not_ going to be
> dma_fence. That thing has horrendous internal ordering constraints
> within the kernel, and the one thing it doesn't allow is making a
> dma_fence depend upon a userspace fence.
>
> But what we can do is use the same currently existing container
> objects like drm_syncobj or sync_file (timeline syncobj would fit best
> tbh), and stuff a userspace fence behind it. The only trouble is that
> currently timeline syncobjs implement Vulkan's spec, which means if you
> build a wait-before-signal deadlock, you'll wait forever. Well, until
> the user ragequits and kills your process.
>
> So for winsys we'd need to be able to specify the wait timeout
> somewhere for waiting for that dma_fence to materialize (plus the
> submit thread, but userspace needs that anyway to support timeline
> syncobj) if you're importing an untrusted timeline syncobj. And I
> think that's roughly it.
 
Right. The only way you get to materialise a dma_fence from an execbuf is to take a hard timeout, with a penalty for not meeting that timeout. When I say dma_fence I mean dma_fence, because there is no extant winsys support for drm_syncobj, so this is greenfield: the winsys gets to specify its terms of engagement, and again, we've been the orange/green-site enemies of users for quite some time already, so we're happy to continue doing so. If the actual underlying primitive is not a dma_fence, and compositors/protocol/clients need to eat a bunch of typing to deal with a different primitive which offers the same guarantees, then that's fine, as long as there is some tangible whole-of-system benefit.
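To make that concrete, here is a rough sketch of what the compositor-side wait could look like with today's primitives (the helper name, the handle/point arguments and the 100ms budget are all illustrative, not a proposal):

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>
    #include <xf86drm.h>

    /* Illustrative only: wait on an imported, untrusted timeline point
     * with a hard deadline instead of blocking forever. */
    static bool wait_client_fence(int fd, uint32_t syncobj, uint64_t point)
    {
        struct timespec now;
        uint32_t first_signaled;

        clock_gettime(CLOCK_MONOTONIC, &now);

        /* Syncobj waits take an absolute CLOCK_MONOTONIC deadline in ns;
         * give the client 100ms for the fence to materialise and signal. */
        int64_t deadline = (int64_t)now.tv_sec * 1000000000ll +
                           now.tv_nsec + 100 * 1000000ll;

        /* WAIT_FOR_SUBMIT also covers "no fence has materialised yet";
         * on timeout we tear down the client's state rather than stalling
         * the whole compositor. */
        return drmSyncobjTimelineWait(fd, &syncobj, &point, 1, deadline,
                                      DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT,
                                      &first_signaled) == 0;
    }

On timeout the compositor applies whatever penalty it likes (dropping the client's buffer, killing the surface), which is exactly the hard-timeout-with-penalty framing above.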

How that timeout is actually realised is an implementation detail. Whether it's a property of the last GPU job itself that the CPU-side driver can observe, or the kernel driver guarantees that there is a GPU job launched in parallel which monitors the memory-fence status and reports back through a mailbox/doorbell, or the CPU-side driver enqueues delayed work for $n milliseconds' time to check the value in memory and kill the context if it doesn't meet expectations - whatever. I don't believe any of those choices meaningfully impacts kernel driver complexity relative to the initial proposal, but they do allow us to continue to provide the guarantees we do today when buffers cross security boundaries.
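As a very rough sketch of that last option (every name here is hypothetical driver glue, not an existing API): delayed work armed when the dma_fence is handed out, polling the memory fence and killing the context on a miss:

    #include <linux/kernel.h>
    #include <linux/types.h>
    #include <linux/workqueue.h>
    #include <linux/jiffies.h>

    struct my_context;                          /* hypothetical driver context   */
    void my_ctx_kill(struct my_context *ctx);   /* hypothetical ban/teardown hook */

    struct my_fence_watchdog {
        struct delayed_work work;
        u64 *seqno;              /* memory location the GPU will write */
        u64 wait_value;          /* value the dma_fence promised       */
        struct my_context *ctx;
    };

    static void my_fence_timeout(struct work_struct *w)
    {
        struct my_fence_watchdog *wd =
            container_of(to_delayed_work(w), struct my_fence_watchdog, work);

        /* Deadline hit: if the memory fence still hasn't reached the
         * promised value, the context loses its rendering rights. */
        if (READ_ONCE(*wd->seqno) < wd->wait_value)
            my_ctx_kill(wd->ctx);
    }

    /* Armed at the point where the execbuf hands out the dma_fence. */
    static void my_fence_watchdog_arm(struct my_fence_watchdog *wd,
                                      unsigned int timeout_ms)
    {
        INIT_DELAYED_WORK(&wd->work, my_fence_timeout);
        schedule_delayed_work(&wd->work, msecs_to_jiffies(timeout_ms));
    }

On the happy path the driver would just cancel_delayed_work() once the value lands; nothing here needs new core machinery.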

There might well be an argument for significantly weakening those security boundaries and shifting the complexity from the DRM scheduler into userspace compositors. So far, though, I have yet to see that argument made coherently.

Cheers,
Daniel
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel
