> GL and GLES are not relevant. What is relevant is EGL, which defines
> interfaces to make things work on the native platform.

Yes and no. This is what the EGL spec says about sharing a texture
between contexts:

"OpenGL and OpenGL ES makes no attempt to synchronize access to texture
objects. If a texture object is bound to more than one context, then it
is up to the programmer to ensure that the contents of the object are
not being changed via one context while another context is using the
texture object for rendering. The results of changing a texture object
while another context is using it are undefined."

There are similar statements with regard to the lack of synchronisation
guarantees for EGL images, or between GL and native rendering, etc.

But the main thing here is that EGL and Vulkan differ significantly.
eglSwapBuffers() is expected to post an unspecified "back buffer" to the
display system using some internal driver magic, and the EGL driver is
then expected to obtain another back buffer at some unspecified point in
the future. Vulkan, on the other hand, is very specific and explicit:
vkQueuePresentKHR() posts a specific VkImage with an explicit set of
semaphores, another image is obtained through vkAcquireNextImageKHR(),
and it is the application's decision whether it wants a fence, a
semaphore, both or none with the acquired buffer. Implicit
synchronisation doesn't mix well with Vulkan drivers and requires a lot
of extra plumbing in the WSI code.

> If you are using EGL_WL_bind_wayland_display, then one of the things
> it is explicitly allowed/expected to do is to create a Wayland
> protocol interface between client and compositor, which can be used to
> pass buffer handles and metadata in a platform-specific way. Adding
> synchronisation is also possible.

Only one-way synchronisation is possible with this mechanism. There is a
standard protocol for recycling buffers - wl_buffer_release() - but that
is exactly where the hand-over from the compositor back to the client
remains unsynchronised; see below.

> > The most troublesome part was Wayland buffer release mechanism, as it
> > only involves a CPU signalling over Wayland IPC, without any 3D driver
> > involvement. The choices were: explicit synchronisation extension or a
> > buffer copy in the compositor (i.e. compositor textures from the copy,
> > so the client can re-write the original), or some implicit
> > synchronisation in kernel space (but that wasn't an option in Broadcom
> > driver).
>
> You can add your own explicit synchronisation extension.

I could, but that requires implementing it in the driver and in a number
of compositors, so the standard zwp_linux_explicit_synchronization_v1
extension is a much better choice here than a custom one.

> In every cross-process and cross-subsystem usecase, synchronisation is
> obviously required. The two options for this are to implement kernel
> support for implicit synchronisation (as everyone else has done),

That would require major changes in the driver architecture, or a second
mechanism doing the same thing but in kernel space - both are
non-starters.

> or implement generic support for explicit synchronisation (as we have
> been working on with implementations inside Weston and Exosphere at
> least),

zwp_linux_explicit_synchronization_v1 is a good step forward. I'm using
this extension as the main synchronisation mechanism in the EGL and
Vulkan drivers whenever it is available.
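To make the two-way flow concrete, here is a minimal client-side sketch
of how that protocol can be used: an acquire fence attached to the
commit for the client-to-compositor direction, and a per-commit
buffer_release object replacing wl_buffer.release for the way back. The
helper names (frame_sync, commit_frame, render_done_fence_fd) are made
up for illustration, the surface_sync object is assumed to come from
zwp_linux_explicit_synchronization_v1_get_synchronization() on the
global, and the header/binding names are whatever wayland-scanner
generates from the unstable v1 XML - so treat this as a sketch, not
driver code:

/* Sketch: explicit, two-way synchronisation with
 * zwp_linux_explicit_synchronization_v1 (error handling omitted). */
#include <wayland-client.h>
#include "linux-explicit-synchronization-unstable-v1-client-protocol.h"

struct frame_sync {
    /* From zwp_linux_explicit_synchronization_v1_get_synchronization(). */
    struct zwp_linux_surface_synchronization_v1 *surface_sync;
    struct zwp_linux_buffer_release_v1 *release;
};

/* The compositor answers on the per-commit release object instead of the
 * unsynchronised wl_buffer.release event. */
static void handle_fenced_release(void *data,
                                  struct zwp_linux_buffer_release_v1 *rel,
                                  int32_t fence_fd)
{
    /* Wait on (or import) fence_fd before reusing the buffer. */
}

static void handle_immediate_release(void *data,
                                     struct zwp_linux_buffer_release_v1 *rel)
{
    /* The buffer can be reused straight away. */
}

static const struct zwp_linux_buffer_release_v1_listener release_listener = {
    .fenced_release = handle_fenced_release,
    .immediate_release = handle_immediate_release,
};

/* Per frame: attach the buffer, pass the GPU's "rendering finished" fence
 * as the acquire fence, and ask for an explicit release object. */
static void commit_frame(struct wl_surface *surface, struct wl_buffer *buffer,
                         struct frame_sync *sync, int render_done_fence_fd)
{
    wl_surface_attach(surface, buffer, 0, 0);

    /* The fd is sent over the socket; the caller keeps its copy and is
     * still responsible for closing it eventually. */
    zwp_linux_surface_synchronization_v1_set_acquire_fence(
        sync->surface_sync, render_done_fence_fd);

    sync->release =
        zwp_linux_surface_synchronization_v1_get_release(sync->surface_sync);
    zwp_linux_buffer_release_v1_add_listener(sync->release,
                                             &release_listener, sync);

    wl_surface_commit(surface);
}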
I remember that Gustavo Padovan was working on explicit sync support in
the display system some time ago. I hope it got merged into the kernel
by now, but I don't know to what extent it is actually being used.

> or implement private support for explicit synchronisation,

If everything else fails, that would be the last-resort scenario, but it
is far from ideal and very costly in terms of implementation and
maintenance, as it would require maintaining custom patches for various
3rd-party components or littering them with multiple custom explicit
synchronisation schemes.

> or do nothing and then be surprised at the lack of synchronisation.

Thank you, but no, thank you :)

Cheers,
Tomek