Hi Gerd,
Any further comments on this?

Thanks,
Vivek

> Hi Gerd,
>
> > > [Kasireddy, Vivek] Correct, that is exactly what I want -- make the
> > > Guest wait until it gets notified that the Host is completely done
> > > processing/using the fb. However, there can be two resources the
> > > guest can be made to wait on: wait for the new/current fb that is
> > > being submitted to be processed (explicit flush)
> >
> > That would be the wait on resource_flush case, right?
> [Kasireddy, Vivek] Yes, correct.
>
> > > or wait for the previous fb that was submitted earlier (in the
> > > previous repaint cycle) to be processed (explicit sync).
> >
> > That would be the wait on set_scanout case, right?
> [Kasireddy, Vivek] Right.
>
> > And it would effectively wait on the previous fb not being needed by
> > the host any more (because the page-flip to the new fb completed), so
> > the guest can re-use the previous fb to render the next frame, right?
> [Kasireddy, Vivek] Yup.
>
> > (Also, when doing front-buffer rendering with xorg/fbcon and then
> > doing a virtual console switch, the guest could wait for the console
> > switch to complete.)
>
> > > IIUC, explicit sync only makes sense if 1) the Host windowing system
> > > also supports that feature/protocol (currently only upstream Weston
> > > does, but I'd like to add it to Mutter if no one else does) or if
> > > there is a way to figure out (dma-buf sync file?) whether the Host
> > > has completely processed the fb, and 2) if the Qemu UI is not doing
> > > a blit and is instead submitting the guest fb/dmabuf directly to the
> > > Host windowing system. As you are aware, 2) can possibly be done
> > > with the dbus/pipewire Qemu UI backends (I'll explore this soon) but
> > > not with GTK or SDL.
> >
> > Well, I think we need to clearly define the wait flag semantics.
> [Kasireddy, Vivek] At least with our passthrough use-case (maybe not
> with Virgl), I think we need to ensure the following criteria:
> 1) With Blobs, ensure that the Guest and Host would never use the
>    dmabuf/FB at the same time.
> 2) The Guest should not render more frames than the refresh rate of
>    the Host so that GPU resources are not wasted.
>
> > Should resource_flush with the wait flag wait until the host is done
> > reading the resource (blit done)?
> [Kasireddy, Vivek] I started with this but did not find it useful as it
> did not meet 2) above. However, I think we could have a flag for this
> if the Guest is using a virtual vblank/timer and only wants to wait
> until the blit is done.
>
> > Or should it wait until the host screen has been updated (gtk draw
> > callback completed)?
> [Kasireddy, Vivek] This is what the last 7 patches of my Blob series
> (v3) do. So, we'd want to have a separate flag for this as well. And,
> lastly, we are going to need another flag for the set_scanout case
> where we wait for the previous fb to be synchronized.
>
> > Everything else will be a host/guest implementation detail then, and
> > of course this needs some integration with the UI on the host side,
> > and different UIs might have to do different things.
> [Kasireddy, Vivek] Sure, I think we can start with GTK and go from
> there.
>
> > On the guest side, integrating this with fences will give us enough
> > flexibility in how we want to handle the waits. Simplest would be to
> > just block.
> [Kasireddy, Vivek] I agree; simply blocking (dma_fence_wait) is more
> than enough for most use-cases.
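
For illustration, the "just block" option on the guest side could be as
small as waiting on the dma_fence attached to the previously scanned-out
fb before letting it be reused. This is only a sketch, not code from the
series: the struct and helper names below are made up, while
dma_fence_wait()/dma_fence_put() are the existing kernel helpers.

#include <linux/dma-fence.h>

/* Sketch only: example_output and prev_flip_fence are hypothetical names. */
struct example_output {
	/* signaled by the host once it is done with the previous fb */
	struct dma_fence *prev_flip_fence;
};

static int example_wait_prev_fb(struct example_output *out)
{
	int ret = 0;

	if (out->prev_flip_fence) {
		/* block until the host no longer needs the previous fb */
		ret = dma_fence_wait(out->prev_flip_fence, true);
		dma_fence_put(out->prev_flip_fence);
		out->prev_flip_fence = NULL;
	}

	return ret;
}
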
> > We could implement virtual vblanks, which would probably make most
> > userspace work fine without explicit virtio-gpu support. If needed,
> > we could even give userspace access to the fence so it can choose how
> > to wait.
> [Kasireddy, Vivek] Virtual vblanks are not a bad idea, but I think
> blocking with fences in the Guest kernel space seems simpler. And,
> sharing fences with the Guest compositor is also very interesting, but
> I suspect we might need to modify the compositor for this use-case,
> which might be a non-starter. Lastly, even with virtual vblanks, we
> still need to make sure that we meet the two criteria mentioned above.
>
> Thanks,
> Vivek
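
On the virtual vblank option, something along the lines of what vkms does
today would look roughly like the sketch below: an hrtimer fires at the
host refresh period and delivers a vblank event so pending page-flips
complete, which paces the guest without an explicit wait flag. This is an
illustration only; the struct and function names are placeholders, while
hrtimer_forward_now() and drm_crtc_handle_vblank() are the existing
kernel helpers.

#include <linux/hrtimer.h>
#include <drm/drm_crtc.h>
#include <drm/drm_vblank.h>

/* Sketch only: example_virtual_output and example_vblank_simulate are
 * made-up names; the pattern mirrors what vkms does. */
struct example_virtual_output {
	struct drm_crtc crtc;
	struct hrtimer vblank_timer;
	ktime_t period_ns;		/* ~16.67 ms for a 60 Hz host display */
};

static enum hrtimer_restart example_vblank_simulate(struct hrtimer *timer)
{
	struct example_virtual_output *out =
		container_of(timer, struct example_virtual_output, vblank_timer);

	/* deliver a vblank event so pending page-flips complete */
	drm_crtc_handle_vblank(&out->crtc);

	/* re-arm for the next virtual vblank period */
	hrtimer_forward_now(timer, out->period_ns);

	return HRTIMER_RESTART;
}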