Re: [PATCH v2 spice-protocol 2/2] Add unix GL scanout messages

> Hi
> 
> ----- Original Message -----
> > On Fr, 2015-12-18 at 03:47 -0500, Marc-André Lureau wrote:
> > > Hi
> > > 
> > > ----- Original Message -----
> > > > > 
> > > > > If you create a "primary" surface with surface_create, that's
> > > > > basically all you need to start displaying. It could quite easily
> > > > > learn to take a shm fd while keeping the rest of the Spice
> > > > > qxl/2d/canvas semantic.
> > > > 
> > > > But then you still stream all the qxl commands and image data over the
> > > > socket, so you don't save much compared to non-shm case.
> > > 
> > > I mean you could just share a QXL primary surface, display it and not
> > > draw
> > > into it.
> > 
> > Which VGA?  Workflow is quite different on qxl and anything else.
> > With qxl we'll go send the guests qxl render commands over to the
> > client, and the client renders it.  On the qemu/server side there is no
> > rendered primary surface, unless someone asks for it and we kick the
> > local renderer on the server side to satisfy the request.  Sharing the
> > primary surface doesn't buy us much here IMO.
> 
> 
> ..Yes (I know), this is all hypothetical, based on discussion from Frediano to
> extend scanout to shm/canvas. fwiw, the server does all the rendering at
> some point or another, so imho, it would still be a good idea to simply
> share the server surface...
>  

Sorry, I should explain my idea in more detail.
In this case we are talking about an optimization for the local case (Qemu/spice-server
and the client running on the same machine). The idea is memory sharing, where "memory"
is a generic term: sharing a GL context or a dma buffer is also memory sharing.
The idea, then, is to share the QXL frame buffer with the client.
The protocol can be the same as in the GL case; since the frame buffer format used by
QXL is quite standard, I think the GL buffer messages could be reused for the same
purpose. The problem is that the frame buffer is allocated inside the virtual card
memory by the guest (the guest passes spice-server the memory range to use).
So with a dma buffer, either all the card BARs would have to be exported as separate
dma buffers, or there would have to be a way (in Qemu) to share the memory range given
by the guest through a dma buffer (a kind of remapping of that memory into a newly
allocated dma buffer, transparent to the guest).
The offset parameter comes into this because a dma buffer allocated by Qemu may not be
page aligned with the memory given by the guest.

> > With stdvga/cirrus/virtio-vga(in-2d-mode) the guest renders into a
> > framebuffer.  qemu creates a primary surface and sends over updates
> > (simple image blits), using dirty page tracking on the guest frame
> > buffer, pretty much like vnc.  *Those* updates can easily be dropped by
> > placing the primary surface into shared memory instead.  And it would be
> > very similar to the opengl mode, qemu would copy from guest memory to
> > shared memory then instead of copying from guest memory to (dma-buf
> > exported) opengl texture.
> 
> again, that's for the people who don't know how this stuff works; I have a
> fair bit of experience with all that. But thanks anyway ;)

Frediano
_______________________________________________
Spice-devel mailing list
Spice-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/spice-devel
