> Hi,
>
> > Yes you want to use EGL here, I think we could probably put more code
> > in qemu to help with this case.
>
> Sure, if anything is needed we'll get that sorted ;)
>
> I suspect spice-server needs access to the gl context helpers
> (dpy_gl_ctx_*) if it wants use opengl.
>

I would prefer something more low level (like EGLDisplay, EGLContext or EGLSurface). I'll do some more digging; it's not yet clear what a possible interface should be, but having duplicate EGL initialization (in QEMU and in spice-server) surely does not look like a good thing.

In the meantime I tried gstreamer-vaapi (thanks to Christophe) and did some more digging.

GStreamer seems to assume that a dmabuf is mmap-able. It is not: mmap for i915 GEM PRIME buffers is only implemented starting with kernel 4.6 (I am running 4.5.7), and even then mmap returns failure on some i915 implementations due to cache coherency issues. So we must support mmap failure in any case. It looks like dmabuf works quite well between the kernel and devices, but still has some problems in user space.

I enabled VA-API usage in the spice-server GStreamer code (I had to install some packages such as gstreamer1-vaapi and libva-intel-driver) and the program became unstable, with apparent memory corruption (I still have to understand the cause). VA-API looks able to receive dmabufs, but this feature does not seem to be used by GStreamer, which uses mmap on the dmabufs instead.

As for VA-API implementations, the Intel one looks quite good, but for other cards the situation is not that great. In particular, for Nvidia the plugin is an adapter for VDPAU, which does decoding, not encoding. Encoding for Nvidia is done with NVENC (which looks hard to install on Linux).

> Possibly it also wasn't the most clever way to pass on a dmabuf
> filehandle to spice-server. If spice-server needs ask qemu for a gl
> context anyway we might pass around the texture id, then let
> spice-server figure whenever it wants export it as dma-buf for a local
> spice-client or do something else with the texture.
I was thinking of using glGetnTexImage, though I don't know if it's a good choice. Of course the better thing would be to pass the buffer to the physical card and get the stream out of it (which is why we are looking at VA-API).

> I think at the end of the day it boils down to what you need for the
> vaapi interface. If vaapi is able to accept dmabufs we should be fine
> with the current setup.
>
> > You'd probably want to talk to Gerd to find the nice/proper way to
> > do things, but it most definitely involves egl, and most likely
> > readpixels into a PBO.
>
> cheers,
>   Gerd

What scares me a bit is the network flow. Streaming the whole screen takes a lot of bandwidth, and the results are quite bad. First you get a lag, which is terrible; trying a game makes this clear. One of the problems is that, for various reasons, the buffer between client and server can grow big enough to contain seconds of streaming, so the delay becomes large. Some bandwidth throttling should be done. It also looks like our bandwidth-detection code is not that great: sometimes GStreamer's bandwidth limitation kicks in, but it reduces the quality so much that the stream becomes barely understandable. It would probably be better to decrease the frame rate instead.

Another problem I hit (I should open a bug) is that QEMU with virgl wakes spice up 40-50 times a second (calling spice_qxl_wakeup) for no apparent reason.

Frediano
_______________________________________________
Spice-devel mailing list
Spice-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/spice-devel