On 28/10/10 21:14, Anthony Liguori wrote:
>> If this code was invasive to QEMU's core, I'd say 'no way', but it's
>> just not. And as the GL device is versioned, we can keep using it
>> even if the passthrough is replaced by a virtual GPU.
>
> The virtio-gl implementation is basically duplicating virtio-serial.
> It looks like it creates a totally separate window for the GL session.
> In the current form, is there really any advantage to having the code
> in QEMU? It could just as easily live outside of QEMU.

You could say much the same about any driver in QEMU... you could
serialise up the registers and ship the data off to be processed on
another PC if you wanted to.

The code does not, however, create a separate window for the GL
session. The GL scene is rendered offscreen and then piped back to the
guest for display, so that it is fully composited into the guest's
graphical environment. From a user's perspective, it is as if the guest
had hardware 3D. Performance is very reasonable: around 40 fps in
ioquake3 on modest (host) hardware.

In theory, the code *could* use a serial transport and render in a
separate process, but that would make it much harder to evolve it into
a GPU-like system in the future.

-Ian
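
To make the offscreen path described above concrete, here is a minimal
sketch in plain OpenGL (illustrative only, not the actual virtio-gl
code): the scene is rendered into a framebuffer object instead of a
host window, and the finished frame is read back with glReadPixels so
it can be shipped to the guest. The GL context setup (and glewInit())
are assumed to have happened already, and render_scene() is a
hypothetical callback standing in for the guest's GL command stream.

/*
 * Illustrative sketch of offscreen render + readback. Not the
 * virtio-gl implementation; render_scene() and the current GL
 * context are assumptions for the example.
 */
#include <stdlib.h>
#include <GL/glew.h>

/* Render one frame offscreen and return a malloc()ed RGBA buffer
 * of width * height * 4 bytes, or NULL on failure. */
unsigned char *render_offscreen(int width, int height,
                                void (*render_scene)(void))
{
    GLuint fbo, color, depth;
    unsigned char *pixels = NULL;

    /* Colour attachment: an ordinary RGBA texture. */
    glGenTextures(1, &color);
    glBindTexture(GL_TEXTURE_2D, color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* Depth attachment: a renderbuffer. */
    glGenRenderbuffers(1, &depth);
    glBindRenderbuffer(GL_RENDERBUFFER, depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24,
                          width, height);

    /* Tie both into a framebuffer object. */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depth);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) !=
        GL_FRAMEBUFFER_COMPLETE)
        goto out;

    /* Draw the scene into the FBO rather than a host window. */
    glViewport(0, 0, width, height);
    render_scene();

    /* Read the finished frame back; this is the buffer that would
     * be piped to the guest and composited by its own windowing
     * system, so no separate host window ever appears. */
    pixels = malloc((size_t)width * height * 4);
    if (pixels)
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
                     pixels);

out:
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteRenderbuffers(1, &depth);
    glDeleteTextures(1, &color);
    return pixels;
}

The step this sketch omits is the transport: in virtio-gl the returned
buffer would travel back over the virtio rings and be blitted into the
guest's framebuffer, which is what keeps the result fully composited
inside the guest's graphical environment.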