Hi,

> > That sounds sensible to me. Fence the virtio commands, make sure (on
> > the host side) the command completes only when the work is actually done
> > not only submitted. Has recently been added to qemu for RESOURCE_FLUSH
> > (aka frontbuffer rendering) and doing the same for SET_SCANOUT (aka
> > pageflipping), then send vblank events to userspace on command
> > completion certainly makes sense.
>
> Hm how does this all work? At least drm/virtio uses
> drm_atomic_helper_dirtyfb, so both DIRTYFB ioctl and atomic flips all end
> up in the same driver path for everything. Or do you just combine the
> resource_flush with the flip as needed and let the host side figure it all
> out? From a quick read of virtgpu_plane.c that seems to be the case ...

virtio_gpu_primary_plane_update() will send RESOURCE_FLUSH only for
DIRTYFB and both SET_SCANOUT + RESOURCE_FLUSH for page-flip, and I think
for the page-flip case the host (aka qemu) doesn't get the "wait until
the old framebuffer is not in use any more" part right yet.

So we'll need a host-side fix for that and a guest-side fix to switch
from a blocking wait on the fence to vblank events.

> Also to make this work we don't just need the fence, we need the timestamp
> (in a clock domain the guest can correct for ofc) of the host side kms
> driver flip completion. If you just have the fence then the jitter from
> going through all the layers will most likely make it unusable.

Well, there are no timestamps in the virtio-gpu protocol ...

Also I'm not sure they would be that helpful; any timing is *much* less
predictable in a virtual machine, especially when the host machine is
loaded.

take care,
  Gerd