> > I tried a few years back (spice-protocol 0.12.10) to move these code
> > generation scripts to spice-protocol, but this did not work out
> > nicely, so this was reverted.
>
> I found this too; I opted to use the spice.proto file to write my own
> header with the basics that were required.
>
> > I agree that having the initial link messages in spice/protocol.h,
> > but not the rest of the protocol in spice-protocol, is unexpected.
> > For now, I would suggest that you copy this messages.h file as you
> > suggested, even if that's suboptimal. I don't expect that file to
> > change that often.
> >
> > Can you give more details about the project you are working on?
>
> Sure :). I spend most of my time in a Linux environment for work but
> enjoy the odd game, so I figured I would give a Windows VM a go with
> VGA passthrough. It works a treat, but the issue is getting a video
> feed back to the host; the solutions that are available (ShadowPlay or
> Steam In-Home Streaming) are IMO substandard, as I want to have full
> control over the host as well, not just in games.
>
> These services compress to H.264 and stream it over the network, where
> it is decoded for display. Since this is all on the local host, I
> figured all those overheads could be bypassed by writing a Windows
> driver that allows the host to share some memory-mapped RAM with the
> guest. Then on the guest I wrote an application that uses NvFBC to
> capture raw frames of the desktop and copy them into the shared memory
> segment.
>
> This is why I needed a custom client: I am not using SPICE for the
> video feed, it's just there for mouse and keyboard input. The end
> result is outstanding; the latency is near zero and the video quality
> is lossless.

Maybe a silly question, but if it's all local, don't you have a physical
monitor connected? NvFBC means you passed an Nvidia card to the VM. Or do
you basically have two cards, one Nvidia and one Intel, but you want the
output on the Intel one because, for instance, you don't have a monitor
attached to the Nvidia (this can happen on some laptops)?

In theory you could use a QXL device for memory sharing, copying the
output to its memory and sending draw commands to get the output.
Unfortunately the NvFBC system memory API handles the system buffer, so
you'll probably end up with 3 copies:

- Nvidia GPU -> system (NvFBC)
- system -> shared
- shared -> host GPU

But that is probably not a problem for you (you can possibly avoid a copy
by using the CUDA or GL NvFBC interface).

Frediano
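
Below is a minimal, purely illustrative C sketch of the shared-memory frame
path discussed above, assuming a simple header-plus-pixels layout in the
shared region. It is not the actual driver or guest application from this
thread: the shm_frame_header layout, the capture_desktop() stub, and the
heap-allocated region standing in for the driver-exposed mapping are all
hypothetical. In the real setup the staging buffer would be filled by an
NvFBC system-memory grab and the region would be the RAM shared between
guest and host.

/*
 * Illustrative sketch only -- not the driver or guest application from the
 * thread. All names (shm_frame_header, capture_desktop, the region size)
 * are hypothetical placeholders. The real guest code would map memory
 * exposed by the custom Windows driver and fill the staging buffer with an
 * NvFBC system-memory grab instead of the stub below.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Small header at the start of the shared region so the host side knows
 * when a new frame is complete and how it is laid out. */
typedef struct {
    volatile uint32_t frame_serial; /* bumped after the pixel copy finishes */
    uint32_t width;
    uint32_t height;
    uint32_t pitch;                 /* bytes per row */
} shm_frame_header;

/* Stand-in for a capture call: in the real application this is where the
 * desktop lands in system memory (copy #1: Nvidia GPU -> system). */
static void capture_desktop(uint8_t *dst, uint32_t h, uint32_t pitch)
{
    memset(dst, 0x80, (size_t)pitch * h); /* fake grey frame */
}

int main(void)
{
    const uint32_t w = 1920, h = 1080, pitch = w * 4;
    const size_t shm_size = sizeof(shm_frame_header) + (size_t)pitch * h;

    /* Plain heap memory here; the real application would map the RAM that
     * the driver shares between guest and host. */
    uint8_t *shm = malloc(shm_size);
    uint8_t *staging = malloc((size_t)pitch * h); /* system buffer from the capture API */
    if (!shm || !staging)
        return 1;

    shm_frame_header *hdr = (shm_frame_header *)shm;
    uint8_t *pixels = shm + sizeof(*hdr);

    hdr->width = w;
    hdr->height = h;
    hdr->pitch = pitch;
    hdr->frame_serial = 0;

    for (int i = 0; i < 3; i++) {
        capture_desktop(staging, h, pitch);           /* copy #1: GPU -> system */
        memcpy(pixels, staging, (size_t)pitch * h);   /* copy #2: system -> shared */
        hdr->frame_serial++;  /* host polls this, then does copy #3: shared -> host GPU */
        printf("published frame %u\n", (unsigned)hdr->frame_serial);
    }

    free(staging);
    free(shm);
    return 0;
}

The frame_serial counter is just one simple way for the host side to notice
a completed frame before performing the final shared -> host GPU copy; as
suggested above, the CUDA or GL NvFBC interface could let the capture land
closer to the shared region and drop one of the intermediate copies.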