On 12 July 2017 at 17:39, Christian König <deathsimple at vodafone.de> wrote:
> Am 11.07.2017 um 17:43 schrieb Jason Ekstrand:
>
> On Tue, Jul 11, 2017 at 12:17 AM, Christian König
> <deathsimple at vodafone.de> wrote:
>>
>> [SNIP]
>>>>>
>>>>> If we ever want to share fences across processes (which we do),
>>>>> then this needs to be sorted in the kernel.
>>>>
>>>> That would clearly get a NAK from my side, even Microsoft forbids
>>>> wait before signal because you can easily end up in deadlock
>>>> situations.
>>>>
>>>> Please don't NAK things that are required by the API specification
>>>> and CTS tests.
>>>
>>> There is no requirement for every aspect of the Vulkan API
>>> specification to be mirrored 1:1 in the kernel <-> userspace API.
>>> We have to work out what makes sense at each level.
>>
>> Exactly, if we have a synchronization problem between two processes,
>> that should be solved in userspace.
>>
>> E.g. if process A hasn't submitted its work to the kernel, it should
>> flush its commands before sending a flip event to the compositor.
>
> Ok, I think there is some confusion here on what is being proposed.
> Here are some things that are *not* being proposed:
>
> 1. This does *not* allow a client to block another client's GPU work
> indefinitely. This is entirely for a CPU wait API to allow for a
> "wait for submit" as well as a "wait for finish".
>
> Yeah, that is a rather good point.
>
> 2. This is *not* for system compositors that need to be robust
> against malicious clients.
>
> I can see the point, but I think the kernel interface should still be
> idiot proof even in the unlikely case the universe suddenly stops
> producing idiots.
>
> The expected use for the OPAQUE_FD is two very tightly integrated
> processes which trust each other but need to be able to share
> synchronization primitives.
>
> Well, that raises a really important question: What is the actual use
> case for this if not communication between client and compositor?

VR clients and compositors.

> Could we do this "in userspace"? Yes, with added kernel API. We would
> need some way of strapping a second FD onto a syncobj or combining
> two FDs into one to send across the wire or something like that, then
> add a shared memory segment, and then pile on a bunch of code to do
> cross-process condition variables and state tracking. I really don't
> see how that's a better solution than adding a flag to the kernel API
> to just do what we want.
>
> Way too complicated.
>
> My thinking was rather to optionally allow a single page to be
> mmap()ed into the process address space from the fd and then put a
> futex, pthread_cond or X shared memory fence or anything like that
> into it.

Is that easier than just waiting in the kernel? I'm not sure how
optimised we need this path to be.

Dave.
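
P.S. To make the two approaches concrete: the flag Jason is proposing
would make the syncobj CPU wait look roughly like the sketch below from
userspace. This follows the syncobj wait patches on the list, but treat
the flag name as a placeholder until something actually lands.

  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <drm/drm.h>   /* struct drm_syncobj_wait, DRM_IOCTL_SYNCOBJ_WAIT */

  /* Proposed flag: wait for a fence to be attached to the syncobj
   * ("submitted") before waiting for it to signal. Define it here in
   * case the installed header predates the patches. */
  #ifndef DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT
  #define DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT (1 << 0)
  #endif

  static int
  wait_for_submit_and_signal(int drm_fd, uint32_t handle,
                             int64_t timeout_nsec)
  {
          struct drm_syncobj_wait args;

          memset(&args, 0, sizeof(args));
          args.handles = (uintptr_t)&handle;  /* user pointer to handle array */
          args.count_handles = 1;
          args.timeout_nsec = timeout_nsec;
          /* Without this flag, waiting on a syncobj that has no fence
           * attached yet just fails; with it, the kernel first blocks
           * until a fence shows up, then waits for it to signal. */
          args.flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT;

          return ioctl(drm_fd, DRM_IOCTL_SYNCOBJ_WAIT, &args);
  }

All of the submitted-vs-signaled state tracking stays in the kernel;
userspace just passes one extra flag.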
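
Whereas my reading of Christian's alternative is something like the
following: mmap() a page of shared state from the fd and park waiters on
a futex in it. To be clear, none of this exists today (there is no
mmap() on a syncobj fd), so the page layout and all the names below are
made up purely for illustration.

  #include <limits.h>
  #include <stdatomic.h>
  #include <stdint.h>
  #include <linux/futex.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Imaginary layout of the one shared page exported by the fd. */
  struct syncobj_page {
          _Atomic uint32_t submitted;  /* 0 = no fence attached yet */
  };

  static long
  futex(_Atomic uint32_t *uaddr, int op, uint32_t val)
  {
          return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
  }

  /* Waiter side (e.g. the VR compositor): block until the client has
   * actually attached a fence, then fall back to the normal kernel
   * "wait for signal" path. */
  static void
  wait_for_submit(struct syncobj_page *page)
  {
          while (atomic_load(&page->submitted) == 0)
                  futex(&page->submitted, FUTEX_WAIT, 0);  /* spurious wakeups ok */
  }

  /* Submitter side (the client), right after its submission ioctl. */
  static void
  mark_submitted(struct syncobj_page *page)
  {
          atomic_store(&page->submitted, 1);
          futex(&page->submitted, FUTEX_WAKE, INT_MAX);  /* wake all waiters */
  }

  /* Both sides would get at the page with something like:
   *   struct syncobj_page *page = mmap(NULL, 4096,
   *                                    PROT_READ | PROT_WRITE,
   *                                    MAP_SHARED, syncobj_fd, 0);
   */

Doable, but that's new kernel API for the mmap plus a pile of userspace
machinery, just to avoid one wait flag.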