v4l2 mem2mem compose support?

Greetings,

What is needed to take advantage of hardware video composing
capabilities and make them available in something like GStreamer?

Philipp's mem2mem driver [1] exposes the i.MX IC, and GStreamer's
v4l2convert element uses it nicely for hardware-accelerated
scaling/CSC/flip/rotate. What I'm looking for is something that
extends that concept and allows composing frames from multiple video
capture devices into a single memory buffer, which could then be
encoded as a single stream.
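
On the V4L2 side I imagine this would map onto the selection API's
compose target on the capture queue of the mem2mem device, roughly as
in the untested sketch below (whether any mainline mem2mem driver
actually accepts this is part of what I'm unsure about):

/*
 * Rough sketch: ask a mem2mem device to place its converted frame at
 * a given rectangle inside a larger capture buffer.
 */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int set_compose_rect(const char *dev, int left, int top,
                            int width, int height)
{
        struct v4l2_selection sel;
        int fd, ret;

        fd = open(dev, O_RDWR);
        if (fd < 0)
                return -1;

        memset(&sel, 0, sizeof(sel));
        sel.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;  /* m2m capture queue */
        sel.target = V4L2_SEL_TGT_COMPOSE;       /* dest rect in the buffer */
        sel.r.left = left;
        sel.r.top = top;
        sel.r.width = width;
        sel.r.height = height;

        /* Drivers without compose support would presumably fail here. */
        ret = ioctl(fd, VIDIOC_S_SELECTION, &sel);
        close(fd);
        return ret;
}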

This was made possible by Carlo's gstreamer-imx [2] GStreamer plugins
paired with the Freescale kernel, which had some non-mainlined APIs
for the i.MX IPU and GPU. We have used this to take, for example, 8
analog capture inputs, compose them into a single frame, then H.264
encode and stream it. The gstreamer-imx elements exposed properties
largely compatible with those of GstCompositorPad, providing a
destination rect within the compose output buffer as well as
rotation/flip, alpha blending, and the ability to specify a
background fill.
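
For reference, the kind of per-pad control I mean is what the software
compositor element in mainline GStreamer already provides; the sketch
below (software-only, geometry values picked arbitrarily) shows the
model I would like to see backed by the hardware:

/*
 * Sketch: two test sources placed side by side with the software
 * "compositor" element. The gstreamer-imx compositors exposed an
 * almost identical per-pad xpos/ypos/width/height/alpha model, just
 * backed by the IPU/GPU instead of the CPU.
 */
#include <gst/gst.h>

int main(int argc, char **argv)
{
  GstElement *pipe;
  GError *err = NULL;

  gst_init(&argc, &argv);

  pipe = gst_parse_launch(
      "compositor name=comp background=black "
      "  sink_0::xpos=0   sink_0::ypos=0 "
      "  sink_1::xpos=320 sink_1::ypos=0 sink_1::alpha=0.8 "
      "! videoconvert ! autovideosink "
      "videotestsrc ! comp.sink_0 "
      "videotestsrc pattern=ball ! comp.sink_1", &err);
  if (!pipe) {
    g_printerr("parse error: %s\n", err->message);
    return 1;
  }

  gst_element_set_state(pipe, GST_STATE_PLAYING);
  g_main_loop_run(g_main_loop_new(NULL, FALSE));
  return 0;
}

glvideomixer seems to accept the same per-pad properties, which is
part of why I ask about the OpenGL elements below.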

Is it possible that some of this capability might be available today
with the OpenGL GStreamer elements?

Best Regards,

Tim

[1] https://patchwork.kernel.org/patch/10768463/
[2] https://github.com/Freescale/gstreamer-imx


