Re: RFC: Integrating Virgil and Spice

  Hi,

> The basic idea is to use qemu's console layer (include/ui/console.h)
> as an abstraction between the new virtio-vga device Dave has in mind
> (which will include optional 3D rendering capability through VIRGIL),
> and various display options, i.e. SDL, vnc and Spice.
> 
> The console layer would need some extensions for this:
> 
> 1) Multi head support, a question which comes up here, is do we only
> add support for multiple heads on a single card, or do we also want
> to support multiple cards each driving a head here ? I myself tend
> to go with the KISS solution for now and only support a single
> card with multiple heads.

Support for multiple cards is there.  Well, at least the groundwork.
The ui core can deal with it.  spice can deal with it.  Secondary qxl
cards used to completely bypass the qemu console subsystem.  This is no
longer the case with qemu 1.5+.

Not all UIs can deal with it in a sane way though.  With SDL and VNC the
secondary qxl card is just another console, so ctrl-alt-<nr> can be used
to switch to it.

I once had an experimental patch to make the gtk ui open a second window
for the secondary card.  It didn't end up upstream and isn't in my git
tree any more; IIRC I dropped it during one of the rebases.  It isn't
hard to redo though.


That leaves the question of how to do single-card multihead.  I think
the most sensible approach here is to go the spice route, i.e. have one
big framebuffer and define scanout rectangles for the virtual monitors.
This is how real hardware works, and it also provides a natural fallback
mode for UIs which don't support scanout rectangles:  they simply show a
single window with the whole framebuffer, similar to old spice clients.

To get that done we effectively have to handle the monitor config
properly at qemu console level instead of having a private channel
between qxl and spice.
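
Roughly something along these lines (again just a sketch, reusing the
hypothetical MultiHeadState from above): the display device hands its
monitor configuration to the console core, and the console core forwards
it to whatever UI is attached:

  /* hypothetical, not existing code: called by the display device
   * whenever the guest changes the monitor configuration */
  void qemu_console_set_monitor_config(QemuConsole *con,
                                       const MultiHeadState *cfg);

  /* ... and a new DisplayChangeListener callback the console core
   * would use to forward it to the attached UI (spice, vnc, gtk, sdl);
   * UIs not implementing it keep showing the whole framebuffer */
  void (*dpy_monitor_config)(DisplayChangeListener *dcl,
                             const MultiHeadState *cfg);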


> 2) The ability for a video-card generating output to pass a dma-buf
> context to the display (ui in qemu terms) to get the contents from,
> rather than requiring the contents to be rendered to some memory
> buffer. This way we can save the quite expensive read-back from gpu
> memory of the rendered result and then copying that back to the
> framebuffer of the gpu for local displays (ie gtk, SDL),

Hmm?  Not sure what you are asking for...

First, reading from gpu memory isn't expensive.  It's all virtual, no
slow read cycles as with real hardware.  There is almost no difference
between gpu memory and main memory for kvm guests.  It's not clear to me
why you are copying stuff from/to gpu memory.

Second, you can have your scanout framebuffer in main memory.  That
isn't a problem at all.  It only needs to be contiguous in guest
physical memory, scatter-gather for the framebuffer isn't going to fly.
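
Just to illustrate what that means for the guest/host interface, a
hypothetical "set scanout" request (illustrative only, not the actual
virtio-vga protocol) would carry a single guest-physical base address:

  /* hypothetical sketch: the framebuffer is described by one
   * contiguous guest-physical range plus pitch, so the host can
   * map it with a single guest-physical -> host-virtual translation
   * (e.g. cpu_physical_memory_map()); no scatter-gather list */
  struct set_scanout {
      uint64_t fb_gpa;      /* guest-physical base of the framebuffer */
      uint32_t fb_stride;   /* bytes per scanline */
      uint32_t fb_format;   /* pixel format, e.g. XRGB8888 */
      uint32_t width, height;
  };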

> For proper multi-head support in the ui layer for local displays,
> we will need to use SDL-2, either by porting the current SDL ui code
> to SDL-2, or by introducing a new SDL-2 ui component.

/me votes for a new SDL-2 ui component, the historically grown SDL code
could use a rewrite anyway ;)
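
For what it's worth, multi-window output is straightforward with the
SDL-2 API; a minimal standalone sketch (not qemu code) opening one
window per head looks like this:

  #include <SDL2/SDL.h>

  /* minimal SDL2 sketch: one window + renderer per head.
   * SDL 1.2 can only drive a single window, which is why proper
   * multi-head local display needs SDL-2. */
  int main(void)
  {
      SDL_Window *win[2];
      SDL_Renderer *ren[2];
      int i;

      if (SDL_Init(SDL_INIT_VIDEO) != 0) {
          return 1;
      }
      for (i = 0; i < 2; i++) {
          win[i] = SDL_CreateWindow("head", SDL_WINDOWPOS_UNDEFINED,
                                    SDL_WINDOWPOS_UNDEFINED,
                                    1024, 768, SDL_WINDOW_RESIZABLE);
          ren[i] = SDL_CreateRenderer(win[i], -1, 0);
          /* each head's scanout rectangle would be uploaded as a
           * texture and presented on its own renderer here */
      }
      SDL_Delay(2000);
      for (i = 0; i < 2; i++) {
          SDL_DestroyRenderer(ren[i]);
          SDL_DestroyWindow(win[i]);
      }
      SDL_Quit();
      return 0;
  }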

cheers,
  Gerd



_______________________________________________
Spice-devel mailing list
Spice-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/spice-devel



