> > There are multiple ways synchronization can be achieved,
> > fences/sync objects is one common approach, however we're
> > presenting a different approach. Personally, I quite like
> > fence sync objects, however we believe it requires a lot of
> > userspace interfaces to be changed to pass around sync object
> > handles. Our hope is that the kds approach will require less
> > effort to make use of as no existing userspace interfaces need
> > to be changed. E.g. to use explicit fences, struct
> > drm_mode_crtc_page_flip would need new members to pass in the
> > handle(s) of sync object(s) which the flip depends on (i.e.
> > don't flip until these fences fire). The additional benefit of
> > our approach is that it prevents userspace specifying dependency
> > loops which can cause a deadlock (see kds.txt for an explanation
> > of what I mean here).
>
> It is easy to cause cyclic dependencies with implicit fences unless you
> are very sure that the client can only cause linear implicit dependencies.

I'm not sure I know what you mean by linear implicit dependencies?

> But clients already have synchronization dependencies with userspace.
> That makes implicit synchronization possibly cause unexpected
> deadlocks.

Again, not sure what you mean here? Do you mean that userspace can
submit a piece of work to a driver which depends on something else
happening in userspace?

> Explicit synchronization is easier to debug because a developer using
> explicit synchronization can track the dependencies in userspace. But
> of course that makes the userspace API harder to use than an API using
> implicit synchronization.
>
> But implicit synchronization can avoid client deadlock issues,
> provided the client can never block the "fence" from triggering in
> finite time once it is granted access. The page flip can be
> synchronized in that manner if the client can't block the HW from
> processing queued rendering.
Yes, I guess this is the critical point - this approach assumes that
when a client starts using a resource, it will only do so for a finite
amount of time. If userspace wanted to participate in the scheme, we
would probably need some kind of timeout, otherwise userspace could
prevent other devices from accessing a resource.

> You were talking about adding a new parameter to the page flip ioctl.
> I fail to see the need for it because page flip already has the fb
> object as a parameter that should map to the implicit synchronization
> fence through dma_buf.

This is the point I was trying to make. With explicit fence objects you
do have to add a new parameter, whereas with this kds implicit approach
you do not - the buffer itself becomes the sync object.

> > While KDS defines a very generic mechanism, I am proposing that
> > this code or at least the concepts be merged with the existing
> > dma_buf code, so the struct kds_resource members get moved to
> > struct dma_buf, kds_* functions get renamed to dma_buf_*
> > functions, etc. So I guess what I'm saying is please don't review
> > the actual code just yet, only the concepts the code describes,
> > where kds_resource == dma_buf.
>
> But the documented functionality sounds very much deadlock prone, if
> userspace gets exclusive access and needs to wait for implicit access
> synchronization.
>
> app A has access to buffer X
> app B requests exclusive access to buffer X and blocks waiting for access
> app A makes synchronous IPC call to app B
>
> I didn't read the actual code at all to figure out if that is a possible
> scenario. But it sounds like a possible scenario based on the
> documentation talking about EGL depending on exclusive access.

The intention was to use this mechanism for synchronizing between
drivers rather than between userspace processes; I think the userspace
access is somewhat of an afterthought which will probably need some more
thought.
In the example you give, app A making a synchronous IPC call to app B
breaks the requirement that clients must complete in a finite time,
which in the case of userspace access could be enforced by a timeout.
Though I would have thought there's a better way to handle this than
just a timeout.

Cheers,

Tom

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel