Re: [PATCH RFC EXP] remote Virgl support

> 
> Hi
> 
> ----- Original Message -----
> 
> > > Do you know which one? At the beginning I was worried this was true
> > > even for QXL but is not. Looks like it only happens for Virgl.
> > > 
> > 
> > See some commits here: https://github.com/elmarco/qemu/commits/virgl
> > 
> > In particular
> > https://github.com/elmarco/qemu/commit/ee6c0cf4d639ac32b9fad9e3db7cffdea6b2599f
> 
> It could be that it is no longer a valid commit, now that qemu/spice uses GL
> scanout even for the console (it used to be fine when we had the dual QXL /
> GL mode)
> 

Some updates.

I got some improvements over my initial patch: fewer memory copies.
I'm trying to use vaapi and performance is much better, but I'm still
copying to system memory before feeding it to vaapi.

I got a reply from the gstreamer list, see https://lists.freedesktop.org/archives/gstreamer-devel/2016-June/059206.html.

It looks like the gstreamer-vaapi code is quite new and the layers are not
that stable. There are multiple APIs to do encoding in hardware but no common
path; for instance the Nvidia bindings for vaapi offer only decoding, not
encoding.
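
Just to give an idea of what I mean by checking driver capabilities, something
like the following can be used to list the entrypoints a vaapi driver exposes
for H.264. This is only a rough sketch: the render node path is an assumption
and the error handling is minimal.

/* List the entrypoints a vaapi driver exposes for H.264, to see whether
 * it can encode at all.  The render node path is an assumption. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <va/va.h>
#include <va/va_drm.h>

int main(void)
{
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0)
        return 1;

    VADisplay dpy = vaGetDisplayDRM(fd);
    int major, minor;
    if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS)
        return 1;

    int num = 0;
    VAEntrypoint *eps = calloc(vaMaxNumEntrypoints(dpy), sizeof(*eps));
    if (vaQueryConfigEntrypoints(dpy, VAProfileH264High, eps, &num) == VA_STATUS_SUCCESS) {
        for (int i = 0; i < num; i++)
            printf("entrypoint %d%s\n", eps[i],
                   eps[i] == VAEntrypointEncSlice ? " (encode)" :
                   eps[i] == VAEntrypointVLD ? " (decode)" : "");
    }

    free(eps);
    vaTerminate(dpy);
    close(fd);
    return 0;
}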

I got kernel 4.6 (an f24 update) where the i915 code is able to mmap the
buffer, but the memory is still arranged in the same tiled way even though
querying the tiling (DRM_IOCTL_I915_GEM_SET_TILING ioctl) reports it as not
tiled; I would like to try another card to check the memory layout.
I'm still using direct i915 ioctls.
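
For reference, the tiling state can be read back with something like the
sketch below; this assumes the query goes through DRM_IOCTL_I915_GEM_GET_TILING
(the read side of the SET_TILING interface mentioned above) and that fd/handle
come from the existing i915 code.

/* Minimal sketch: query the tiling mode of a GEM buffer object through
 * the i915 ioctl interface.  'fd' must be an open DRM device and
 * 'handle' a valid GEM handle for that device. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <xf86drm.h>
#include <drm/i915_drm.h>

static int query_tiling(int fd, uint32_t handle)
{
    struct drm_i915_gem_get_tiling arg;

    memset(&arg, 0, sizeof(arg));
    arg.handle = handle;

    if (drmIoctl(fd, DRM_IOCTL_I915_GEM_GET_TILING, &arg) != 0)
        return -1;

    /* I915_TILING_NONE means the kernel reports a linear layout,
     * even if the data still looks tiled when mmap'ed. */
    printf("tiling_mode=%u swizzle_mode=%u\n",
           arg.tiling_mode, arg.swizzle_mode);
    return arg.tiling_mode == I915_TILING_NONE ? 0 : 1;
}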


I did countless tests to tune the path for the network!
It looks like when streaming over a real network you get additional latency;
using localhost this latency mostly disappears. It could be related to
TCP_NODELAY/Nagle, or to network buffering. I'm trying to keep an eye
on different things (CPU, bandwidth, network queues), perhaps too many
at the same time. Also, my (domestic) router sometimes loses packets,
which causes retransmissions; the good thing is that the code then appears
to decrease the bandwidth usage for a while until the connection gets
stable again.
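
For completeness, the Nagle knob I'm referring to is just the one below;
a minimal sketch on a plain connected TCP socket. Whether this is really
where the extra latency comes from is still an open question.

/* Disable Nagle coalescing on an already-connected TCP socket. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int disable_nagle(int sockfd)
{
    int one = 1;

    /* TCP_NODELAY sends small segments immediately instead of waiting
     * to coalesce them, trading bandwidth efficiency for latency. */
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}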
What I can do: I can play full-screen videos at 1024x768 using about 450KB/s,
and the audio stays good. The latency is sometimes fine and sometimes
gets really high. I can play Extreme Tux Racer reasonably well at times;
for OpenArena the delay is too big (the ping time is about 170ms on average
but the delay looks like about 1 second).


I played with gstreamer a bit, trying to understand possible ways to feed
an EGL image directly. It turns out that vaapipostproc accepts
video/x-raw(meta:GstVideoGLTextureUploadMeta), which should be a stable
way to feed textures. I tried commands like

GST_GL_API=opengl GST_GL_PLATFORM=egl gst-launch-1.0 -v \
   filesrc location=bbb_sunflower_1080p_30fps_normal.mp4 ! \
   qtdemux ! vaapidecode ! \
   'video/x-raw(meta:GstVideoGLTextureUploadMeta),format=RGBA' ! \
   vaapipostproc ! 'video/x-raw(meta:GstVideoGLTextureUploadMeta),format=RGBA' ! \
   vaapih264enc ! video/x-h264,profile=high ! qtmux ! filesink location=tmp.mov

(yes, it's one command!) but it fails with

WARNING: erroneous pipeline: could not link vaapidecode0 to vaapipostproc0

Trying to play a video with vaapi:

GST_GL_API=opengl gst-launch-1.0 filesrc \
   location=bbb_sunflower_1080p_30fps_normal.mp4 ! qtdemux ! \
   vaapidecode ! 'video/x-raw(meta:GstVideoGLTextureUploadMeta)' ! \
   glimagesink

I got 

intel_do_flush_locked failed: No such file or directory

and strace says:

[pid 22813] ioctl(10, DRM_IOCTL_I915_GEM_SW_FINISH, 0x7f4a1880ab30) = 0
[pid 22813] ioctl(10, DRM_IOCTL_I915_GEM_EXECBUFFER2, 0x7f4a1880aac0) = -1 ENOENT (No such file or directory)
[pid 22813] ioctl(10, DRM_IOCTL_I915_GEM_THROTTLE or DRM_IOCTL_RADEON_CP_RESUME, 0) = 0

so it all looks very experimental. It looks like a lot of work is needed in
gstreamer to make this stuff work. Perhaps I'll do some hack to use vaapi
directly, but I would have liked gstreamer to handle bitrate control and
live streaming for us.


Today I had a look at our streaming code, when it is activated and how to
change it to pass a texture. encode_frame accepts a SpiceBitmap; passing a
SpiceImage with a new type (descriptor.type) could be a way. Unfortunately
stream detection starts by computing the graduality of the frame, which
requires reading the image data (which we shouldn't do), but we could
probably fake it and turn streaming on whenever we get textures, without
reading the image data. Using lazy data extraction to avoid this operation
when frames are dropped could be an improvement, but I don't know if it is
worth it.
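
To make the idea concrete, the dispatch could look roughly like the sketch
below. The GL_TEXTURE type value, the texture payload and the helper
functions are all made up for illustration, and the structs are simplified
stand-ins for the real SpiceImage/SpiceBitmap definitions.

#include <stdint.h>
#include <stddef.h>

/* Image types: BITMAP mirrors the existing pixel-data case,
 * GL_TEXTURE is the new, hypothetical type for a GPU-side frame. */
enum {
    SPICE_IMAGE_TYPE_BITMAP     = 0,
    SPICE_IMAGE_TYPE_GL_TEXTURE = 100, /* made up for this sketch */
};

/* Simplified stand-ins for the real SpiceImageDescriptor/SpiceImage. */
typedef struct ImageDescriptor {
    uint8_t  type;
    uint32_t width, height;
} ImageDescriptor;

typedef struct Image {
    ImageDescriptor descriptor;
    union {
        struct { const uint8_t *data; size_t stride; } bitmap;
        struct { uint32_t tex_id; int dmabuf_fd; } texture; /* hypothetical */
    } u;
} Image;

/* Hypothetical back ends: a HW (vaapi) path and the current SW path. */
static int hw_encode_texture(uint32_t tex_id, int dmabuf_fd)
{ (void)tex_id; (void)dmabuf_fd; return 0; }
static int sw_encode_bitmap(const uint8_t *data, size_t stride,
                            uint32_t w, uint32_t h)
{ (void)data; (void)stride; (void)w; (void)h; return 0; }

/* encode_frame-style dispatch: a texture goes straight to the HW encoder
 * without ever touching the pixel data, a bitmap keeps the current path. */
static int encode_frame(const Image *img)
{
    switch (img->descriptor.type) {
    case SPICE_IMAGE_TYPE_GL_TEXTURE:
        return hw_encode_texture(img->u.texture.tex_id,
                                 img->u.texture.dmabuf_fd);
    case SPICE_IMAGE_TYPE_BITMAP:
        return sw_encode_bitmap(img->u.bitmap.data, img->u.bitmap.stride,
                                img->descriptor.width,
                                img->descriptor.height);
    default:
        return -1;
    }
}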

Frediano
_______________________________________________
Spice-devel mailing list
Spice-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/spice-devel



