On Wed, Dec 21, 2016 at 03:30:04PM +0100, Michael Thayer wrote:
> 21.12.2016 10:05, Daniel Vetter wrote:
> > On Tue, Dec 20, 2016 at 11:38:52AM +0100, Michael Thayer wrote:
> > > The suggested X and Y connector properties are intended as a way for
> > > drivers for virtual machine GPUs to provide information about the
> > > layout of the host system windows (or whatever) corresponding to given
> > > guest connectors.  The intention is for the guest system to lay out
> > > screens in the virtual desktop in a way which reflects the host layout.
> > > Sometimes, though, the guest system chooses not to follow those hints,
> > > usually due to user requests.  In this case it is useful to be able to
> > > pass information back about the actual layout chosen.
> > >
> > > The immediate use case for this is host-to-guest pointer input mapping.
> > > Qemu, VirtualBox and VMware currently handle this by providing an
> > > emulated graphics tablet device to the guest.  libinput defaults, as
> > > did X.Org before it used libinput, to mapping the position information
> > > reported by the device to the smallest rectangle enclosing the screen
> > > layout.  Knowing that layout lets the hypervisor send the right
> > > position information through the input device.
> > >
> > > Signed-off-by: Michael Thayer <michael.thayer@xxxxxxxxxx>
> > > ---
> > > Follow-up to the thread "Passing multi-screen layout to KMS driver".
> > > In that thread, Gerd suggested an alternative way of solving the use
> > > case, namely emulating one input device per virtual screen,
> > > touchscreen-style.  My reasons for preferring this approach are that
> > > it is relatively uninvasive, and closer to the way things are done now
> > > without (in my opinion) being ugly; and that automatic
> > > touchscreen-to-screen input mapping is still not a solved problem.  I
> > > think that both are valid though.
> > >
> > > Both approaches require changes to the hypervisor and virtual hardware,
> > > and to user-space consumers which would use the interface.  I have
> > > checked the mutter source and believe that the change required to
> > > support the interface as implemented here would be minimal, and I
> > > intend to submit a patch if this change is accepted.  I think that the
> > > virtual hardware changes are likely to be less invasive with this
> > > approach than with the other.  This change will, though, also require
> > > small drm driver changes once the virtual hardware has been adjusted;
> > > currently to the qxl driver and to the out-of-tree vboxvideo driver.
> > > It would certainly be nice to have in virtio-gpu.
> >
> > Makes sense I think, but for merging we need:
> > - some driver to implement
>
> This is where it starts getting tricky.  vboxvideo is out of tree.  In
> theory I could look at getting it merged, but that needs time I am rather
> short of (I am the only person maintaining that driver and it is just one
> of my responsibilities, and there are some bits there that are probably
> too ugly to merge as is).  I don't think I am really the person to be
> doing this for qxl/virtio-gpu, as that would require adding the support
> to qemu too.  I think that they really should have it, but I would rather
> not be the one adding it.  So would our out-of-tree driver be good enough?

I don't see the point in merging core code for out-of-tree drivers.  If
it's out-of-tree you can just add this locally (by adding the property).
That has of course the risk of uapi breakage, or of upstream opting for a
slightly different flavour, but that's the price for not being upstream.
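For context, the host-to-guest half of this already exists as ordinary
connector properties, created with
drm_mode_create_suggested_offset_properties() and attached per connector.
A rough driver-side sketch (the helper name below is made up for
illustration; the property-creation and attach calls are the existing
upstream ones):

#include <drm/drm_crtc.h>

/* Illustrative only: attach the existing host-to-guest layout hints to a
 * virtual GPU connector.  Guest-to-host properties like the ones proposed
 * here would be attached the same way, with the guest writing them rather
 * than only reading them. */
static int vgpu_attach_layout_hints(struct drm_connector *connector)
{
	struct drm_device *dev = connector->dev;
	int ret;

	/* Creates dev->mode_config.suggested_x_property/_y_property. */
	ret = drm_mode_create_suggested_offset_properties(dev);
	if (ret)
		return ret;

	/* Initial hint: the host window for this connector is at (0, 0);
	 * the driver updates the values whenever the host layout changes. */
	drm_object_attach_property(&connector->base,
				   dev->mode_config.suggested_x_property, 0);
	drm_object_attach_property(&connector->base,
				   dev->mode_config.suggested_y_property, 0);
	return 0;
}

On the user-space side, a consumer such as mutter could report the layout
it actually chose by writing the proposed properties through libdrm.  A
sketch, assuming the new properties are simply named "X" and "Y" (the
property names and the helper are illustrative; the libdrm calls are the
standard ones):

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Illustrative only: look up a connector property by name and set it. */
static int set_connector_prop(int fd, uint32_t connector_id,
			      const char *name, uint64_t value)
{
	drmModeObjectPropertiesPtr props;
	int ret = -1;

	props = drmModeObjectGetProperties(fd, connector_id,
					   DRM_MODE_OBJECT_CONNECTOR);
	if (!props)
		return -1;

	for (uint32_t i = 0; i < props->count_props; i++) {
		drmModePropertyPtr prop =
			drmModeGetProperty(fd, props->props[i]);

		if (prop && !strcmp(prop->name, name))
			ret = drmModeObjectSetProperty(fd, connector_id,
						       DRM_MODE_OBJECT_CONNECTOR,
						       prop->prop_id, value);
		drmModeFreeProperty(prop);
	}

	drmModeFreeObjectProperties(props);
	return ret;
}

/* e.g. set_connector_prop(fd, connector_id, "X", chosen_x); */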
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch