Re: [PATCH RFC 00/12] Remote Virgl support

Hi,

On Mon, 2016-07-18 at 16:16 +0200, Gerd Hoffmann wrote:
>   Hi,
> 
> > > What is the state of the hardware supported encoding?
> > > How can we pass buffers to the hardware encoder? 
> > 
> > The state here is a bit of a mess.
> > One reason to pass textures instead of dma buffers is that we use GStreamer,
> > and GStreamer's hardware acceleration uses VAAPI; one way to pass frames to
> > VAAPI is through GL textures. GStreamer has a fairly strong assumption that
> > buffers are mmap-able, which is not true for dma buffers.
> > Note that in theory VAAPI can import DRM prime/dma buffers, however this is
> > currently not exported/implemented by gstreamer-vaapi.
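
For reference, the usual path from a GL texture to a dma-buf fd (which a
VAAPI-based pipeline could then import) is the EGL_MESA_image_dma_buf_export
extension. A rough sketch, assuming a current EGL display/context, a
single-plane format, and that the extension is available; error handling
is skipped:

#include <stdint.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>

/* Sketch: wrap an existing GL texture in an EGLImage and export it as a
 * dma-buf fd that a VAAPI/GStreamer consumer could import. */
static int texture_to_dmabuf(EGLDisplay dpy, EGLContext ctx, GLuint tex,
                             int *fourcc, EGLint *stride, EGLint *offset)
{
    PFNEGLCREATEIMAGEKHRPROC create_image =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    PFNEGLEXPORTDMABUFIMAGEQUERYMESAPROC query_image =
        (PFNEGLEXPORTDMABUFIMAGEQUERYMESAPROC)
        eglGetProcAddress("eglExportDMABUFImageQueryMESA");
    PFNEGLEXPORTDMABUFIMAGEMESAPROC export_image =
        (PFNEGLEXPORTDMABUFIMAGEMESAPROC)
        eglGetProcAddress("eglExportDMABUFImageMESA");

    EGLImageKHR image = create_image(dpy, ctx, EGL_GL_TEXTURE_2D_KHR,
                                     (EGLClientBuffer)(uintptr_t)tex, NULL);

    int num_planes = 0;
    int fd = -1;
    query_image(dpy, image, fourcc, &num_planes, NULL); /* format + planes */
    export_image(dpy, image, &fd, stride, offset);      /* caller owns fd  */
    return fd;
}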
> 
> Any chance to extend gstreamer-vaapi?
> 
> > The current status of hardware encoding is a bit confusing.
> > On one side there is VAAPI, which is meant to be a vendor-independent
> > library for hardware decoding/encoding; however, some vendors (like Nvidia)
> > seem not really keen on supporting it for encoding (the Nvidia backend goes
> > through VDPAU, which is limited to decoding). VAAPI was proposed by Intel,
> > so support on Intel is really good.
> 
> Hmm.  But vaapi support is pretty much required to have gstreamer handle
> video encoding for us I guess?
> 
> > On the other side we could have patent/licensing issues, since the main
> > codecs supported in hardware (basically MPEG-2, H.264, HEVC) are all
> > patent-encumbered, while the more open codecs (VP8, VP9) are not currently
> > widely supported.
> 
> That is nasty indeed.  Recent Intel hardware supports VP8 + VP9 too.
> Nvidia is H.264 only as far as I know.
> 
> Not sure if offloading the encoding to the hardware helps with the
> patent situation.  At least we don't have to ship a (cpu) software
> encoder then.  But possibly some kind of firmware or gpu program must be
> uploaded ...
> 

Cisco's OpenH264 encodes and decodes the baseline profile of H.264, and Cisco
covers the patent fees for anybody who uses it. It is not shipped with
everybody's OS by default, but given the motivation and the implementation it
should be available pretty much everywhere in the fairly near term (it is
actually already available on most systems running Firefox - though in
Firefox's directories, not the system ones).

Cisco already provides a Fedora repository with the decoder packaged for
Firefox and GStreamer. Given that, we could probably depend on H.264 baseline
being available universally pretty soon.
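
As a quick sanity check that the GStreamer side is trivial once the plugin is
installed, a minimal encode pipeline using openh264enc (the element from the
gstreamer openh264 plugin) could look like this - the surrounding elements are
just an illustration, not what spice-server would actually build:

#include <gst/gst.h>

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);

    /* Baseline-profile H.264 through Cisco's OpenH264 encoder element. */
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc num-buffers=300 ! video/x-raw,width=640,height=480 ! "
        "openh264enc ! h264parse ! matroskamux ! filesink location=out.mkv",
        &err);
    if (!pipeline) {
        g_printerr("failed to build pipeline: %s\n", err->message);
        g_clear_error(&err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Run until EOS or an error is posted on the bus. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                                 GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
    if (msg)
        gst_message_unref(msg);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipeline);
    return 0;
}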

David

Fedora's OpenH264 wiki page: https://fedoraproject.org/wiki/OpenH264

> > >  (1) Extend the display channel interface to have callbacks for these
> > >      (and thin wrapper functions which map spice display channel to
> > >      QemuConsole so spice-server doesn't need to worry about that).
> > > 
> > 
> > So you mean a way for the spice-server display channel to call some Qemu
> > function, right?
> 
> Yes.  Adding function pointers to QXLInterface & raising the minor display
> channel interface version should do.
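
A rough sketch of what that could look like - the names below are
illustrative, not the final spice-server API; the real change would extend
struct QXLInterface in spice.h and bump SPICE_INTERFACE_QXL_MINOR:

#include <stdint.h>

/* Illustrative only: the kind of callbacks QEMU would fill in so that
 * spice-server can reach back into the console code without knowing
 * about QemuConsole itself. */

typedef struct QXLInstance QXLInstance;   /* opaque, from spice.h */

typedef struct QXLGLCallbacks {
    /* ask QEMU for the current scanout texture of this display */
    void (*gl_get_scanout)(QXLInstance *qxl, uint32_t *tex_id,
                           uint32_t *width, uint32_t *height);
    /* tell QEMU that spice-server is done reading the texture */
    void (*gl_scanout_done)(QXLInstance *qxl);
} QXLGLCallbacks;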
> 
> > >  (2) Have qemu create one context per spice-server (or per display
> > >      channel) and create a new spice_server_set_egl_context() function
> > >      to hand over the context to spice-server.
> > > 
> > 
> > Yes, I added a spice_qxl_gl_init function which sets the display and context.
> 
> Ah, didn't notice that on the first check.  Yes, that looks good
> interface-wise.  Possibly we should create a new context instead of
> reusing qemu_egl_rn_ctx.  But that doesn't affect the spice-server
> interface.
> 
> But can you make that a separate patch please?
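
Something along these lines, presumably - a dedicated context created by
qemu, sharing objects with qemu_egl_rn_ctx so the scanout texture stays
visible, then handed over with the new call (the spice_qxl_gl_init signature
is assumed here, not final):

#include <EGL/egl.h>

/* provided by qemu's ui/egl-helpers.c */
extern EGLDisplay qemu_egl_display;
extern EGLConfig  qemu_egl_config;
extern EGLContext qemu_egl_rn_ctx;

typedef struct QXLInstance QXLInstance;                    /* opaque */
void spice_qxl_gl_init(QXLInstance *qxl, EGLDisplay dpy,
                       EGLContext ctx);                    /* assumed proto */

static void qemu_spice_gl_setup(QXLInstance *qxl)
{
    static const EGLint attr[] = { EGL_CONTEXT_CLIENT_VERSION, 3, EGL_NONE };

    /* separate context for spice-server, sharing with the render node one */
    EGLContext ctx = eglCreateContext(qemu_egl_display, qemu_egl_config,
                                      qemu_egl_rn_ctx, attr);

    spice_qxl_gl_init(qxl, qemu_egl_display, ctx);
}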
> 
> > I would probably feel more confident having more flexibility, as it is not
> > yet clear exactly what information we will end up needing.
> 
> I surely don't want to rush things on the interface side.  I think we
> should have at least a proof-of-concept implementation showing that the
> qemu/spice-server we created actually works before merging things.
> 
> > Another thing I would like to have changed in QEMU is the number of
> > pending frames it sends. I think that a single frame is not enough;
> > there should be at least 2-3 frames. The reason is that streaming/network
> > handling takes more time, so it would be better to have one frame which is
> > being encoded/pending and one which is the newest, and which can be
> > replaced by an even newer frame if encoding/network is not fast enough.
> 
> Hmm.  The guest will not give us 2-3 frames though.  We'll have either
> one or two, depending on whether the guest uses page-flips or not.  So
> implementing a buffering scheme as outlined above requires us to copy
> (or let the gpu copy) the frames.
> 
> The question is where to do it best.  It is doable in both qemu and
> spice-server.  But spice-server knows more about the network/encoder state
> and can possibly avoid the copy, for example in case it isn't going to
> encode the frame anyway due to full network buffers, or in case no client
> is connected in the first place.
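
A sketch of that scheme - one frame being encoded, one pending copy that a
newer frame simply overwrites, and no copy at all when no client is
connected; types and names are illustrative, not actual spice-server code:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct Frame {
    size_t   size;
    uint8_t *data;            /* copied, so the guest buffer can be reused */
} Frame;

typedef struct FrameQueue {
    Frame *encoding;          /* frame currently being encoded / sent */
    Frame *pending;           /* newest frame; replaced if a newer one arrives */
} FrameQueue;

static void frame_free(Frame *f)
{
    if (f) {
        free(f->data);
        free(f);
    }
}

static Frame *frame_copy(const uint8_t *data, size_t size)
{
    Frame *f = malloc(sizeof(*f));
    f->size = size;
    f->data = malloc(size);
    memcpy(f->data, data, size);
    return f;
}

/* Called for every new scanout coming from the guest. */
static void queue_push(FrameQueue *q, const uint8_t *data, size_t size,
                       int client_connected)
{
    if (!client_connected) {
        return;               /* skip the copy entirely */
    }
    frame_free(q->pending);   /* the newer frame wins */
    q->pending = frame_copy(data, size);
}

/* Called when the encoder is ready for the next frame; may return NULL. */
static Frame *queue_pop(FrameQueue *q)
{
    frame_free(q->encoding);
    q->encoding = q->pending;
    q->pending = NULL;
    return q->encoding;
}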
> 
> cheers,
>   Gerd
> 

