Hi,

The previous discussion hasn't produced results, so let's start over. Here's the situation:

 - We currently have kernel and QEMU support for the QEMU vfio-pci display option.

 - The default for this option is 'auto', so the device will attempt to generate a display if the underlying device supports it; currently that's only GVTg and some future release of NVIDIA vGPU (plus Gerd's sample mdpy and mbochs drivers).

 - The display option is implemented via two different mechanisms: a vfio region (NVIDIA, mdpy) or a dma-buf (GVTg, mbochs).

 - Displays using dma-bufs require OpenGL support; displays making use of region support do not.

 - Enabling OpenGL support requires specific VM configurations, which libvirt /may/ want to facilitate.

 - Probing display support for a given device is complicated by the fact that GVTg and NVIDIA both impose requirements on the process opening the device file descriptor through the vfio API:

   - GVTg requires a KVM association or will fail to allow the device to be opened.

   - NVIDIA requires that their vgpu-manager process can locate a UUID for the VM via the process command line.

 - These are both horrible impositions and prevent libvirt from simply probing the device itself.

The above has pressed the need to investigate some sort of alternative API through which libvirt might introspect a vfio device, and with vfio device migration on the horizon, it's natural that some sort of support for migration state compatibility for the device needs to be considered as a second user of such an API. However, we currently have no concept of migration compatibility on a per-device level, as there are no migratable devices that live outside of the QEMU code base. It's therefore assumed that per-device migration compatibility is encompassed by the versioned machine type for the overall VM.
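For reference, the display option above is a per-device property on the QEMU command line. A rough sketch only (the mdev UUID path is a placeholder, and the display frontend shown is just one possible GL-capable choice):

```sh
# Illustrative only: <uuid> is a placeholder for an actual mdev device.
# The vfio-pci 'display' property takes auto/on/off; 'auto' is the
# current default and is spelled out here just for clarity.
qemu-system-x86_64 \
  -machine accel=kvm \
  -display gtk,gl=on \
  -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<uuid>,display=auto
```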
We need participation all the way to the top of the VM management stack to resolve this issue, and it's dragging down the (possibly) simpler question of how we resolve the display situation. Therefore I'm looking for alternatives for display that work within what we have available to us at the moment.

Erik Skultety, who initially raised the display question, has identified one possible solution, which is to simply make the display configuration the user's problem (apologies if I've misinterpreted Erik). I believe this would work something like:

 - libvirt identifies a version of QEMU that includes 'display' support for vfio-pci devices and defaults to adding display=off for every vfio-pci device [have we chosen the wrong default (auto) in QEMU?].

 - New XML support would allow a user to enable display support on the vfio device.

 - Resolving any OpenGL dependencies of that change would be left to the user.

A nice aspect of this is that policy decisions are left to the user, and clearly no interface changes are necessary, perhaps with the exception of deciding whether we've made the wrong default choice for vfio-pci devices in QEMU.

On the other hand, if we do want to give libvirt a mechanism to probe the display support for a device, we can make a simplified QEMU instance the mechanism through which we do that. For example, the script[1] can be provided with either a PCI device or a sysfs path to an mdev device and will run a minimal VM instance, meeting the requirements of both GVTg and NVIDIA, to report the display support and GL requirements for a device. There are clearly some unrefined and atrocious bits of this script, but it's only a proof of concept; the process management can be improved, and we can decide whether we want to provide a QMP mechanism to introspect the device rather than grep'ing error messages.
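The shape of that probe can be approximated as below; this is only a hypothetical sketch of the idea, not the script itself (the mdev path is a placeholder, the grep pattern is invented, and the real script handles process management and output parsing far more carefully):

```sh
# Hypothetical sketch of probing a device via a minimal QEMU instance:
# KVM acceleration satisfies GVTg's requirement for a KVM association,
# and -uuid puts a VM UUID on the command line where NVIDIA's
# vgpu-manager can find it. -S freezes the CPU at startup and
# -nodefaults/-display none keep the instance minimal; we then inspect
# QEMU's output for display/GL-related messages.
qemu-system-x86_64 \
  -machine accel=kvm \
  -uuid "$(uuidgen)" \
  -nodefaults -display none -S \
  -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<uuid>,display=on \
  2>&1 | grep -i 'display\|opengl'
```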
The goal is simply to show that we could choose to embrace QEMU and use it not as a VM, but simply as a tool for poking at a device, given the restrictions the mdev vendor drivers have already imposed.

So I think the question bounces back to libvirt: does libvirt want enough information about the display requirements for a given device to automatically attempt to add GL support for it, effectively a policy of 'if it's supported, try to enable it', or should we leave well enough alone and let the user choose to enable it?

Maybe some guiding questions:

 - Will dma-buf displays always require GL support?

 - Does GL support limit our ability to have a display over a remote connection?

 - Do region-based displays also work with GL support, even if it's not required?

 - Furthermore, should QEMU vfio-pci flip the default to 'off' for compatibility?

Thanks,
Alex

[1] https://gist.github.com/awilliam/2ccd31e85923ac8135694a7db2306646

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list