On Sat, 16 Feb 2019 at 04:48, Hans Verkuil <hverkuil@xxxxxxxxx> wrote:
>
> On 2/16/19 10:42 AM, Hans Verkuil wrote:
> > On 2/16/19 1:16 AM, Tim Harvey wrote:
> >> Greetings,
> >>
> >> What is needed to be able to take advantage of hardware video
> >> composing capabilities and make them available in something like
> >> GStreamer?
> >
> > Are you talking about what is needed in a driver or what is needed in
> > gstreamer? Or both?
> >
> > In any case, the driver needs to support the V4L2 selection API, specifically
> > the compose target rectangle for the video capture.
>
> I forgot to mention that the driver should allow the compose rectangle to
> be anywhere within the bounding rectangle as set by S_FMT(CAPTURE).
>
> In addition, this also means that the DMA has to be able to do scatter-gather,
> which I believe is not the case for the imx m2m hardware.

I believe the 2D blitter can take an arbitrary source rectangle and compose it
into an arbitrary destination rectangle (many of these blitters in fact use Q16
fixed-point coordinates, allowing subpixel rectangles, something V4L2 does not
support). I don't think such a driver exists in any form upstream on the IMX
side. The Rockchip developers recently tried to get one in, but the discussion
didn't go well; the rejection of the proposed Porter-Duff controls was probably
demotivating, since picking the right blending algorithm is the core of such a
driver.

I believe a better approach to upstreaming such a driver would be to write an
M2M spec specific to this type of m2m driver. That spec would also cover
scalers and rotators, since, unlike the IPUv3 (which I believe you are
referring to), many of the CSC and scaler blocks are blitters. We need a spec
because, unlike with most of our current drivers, the buffers passed to CAPTURE
aren't always empty buffers, which may lead to implementations that are
ambiguous under the current spec. The second reason is to avoid having to deal
with legacy implementations, as we have with decoders.
>
> Regards,
>
> Hans
>
> >
> > Regards,
> >
> > Hans
> >
> >>
> >> Philipp's mem2mem driver [1] exposes the IMX IC and GStreamer's
> >> v4l2convert element uses this nicely for hardware accelerated
> >> scaling/csc/flip/rotate but what I'm looking for is something that
> >> extends that concept and allows for composing frames from multiple
> >> video capture devices into a single memory buffer which could then be
> >> encoded as a single stream.
> >>
> >> This was made possible by Carlo's gstreamer-imx [2] GStreamer plugins
> >> paired with the Freescale kernel that had some non-mainlined APIs to
> >> the IMX IPU and GPU. We have used this to take, for example, 8x analog
> >> capture inputs, compose them into a single frame, then H264 encode and
> >> stream it. The gstreamer-imx elements used properties fairly compatible
> >> with the GstCompositorPad element to provide a destination
> >> rect within the compose output buffer as well as rotation/flip, alpha
> >> blending and the ability to specify background fill.
> >>
> >> Is it possible that some of this capability might be available today
> >> with the opengl GStreamer elements?
> >>
> >> Best Regards,
> >>
> >> Tim
> >>
> >> [1] https://patchwork.kernel.org/patch/10768463/
> >> [2] https://github.com/Freescale/gstreamer-imx
> >>
> >
>
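On Tim's last question: today this kind of N-input composition can be done in
GStreamer itself with the software `compositor` element (or `glvideomixer` for
GPU blending), at the cost of extra memory copies compared to an m2m driver. A
hedged, untested sketch follows; the device paths, positions, and encoder
choice are illustrative assumptions (substitute `videotestsrc` for `v4l2src`
to try it without capture hardware):

```shell
# Sketch: compose two capture inputs side by side and H.264-encode the result.
# Pad properties xpos/ypos (and alpha) on compositor/glvideomixer play the
# role the GstCompositorPad-style destination-rect properties did in
# gstreamer-imx.
gst-launch-1.0 \
  compositor name=mix background=black \
      sink_0::xpos=0   sink_0::ypos=0 \
      sink_1::xpos=320 sink_1::ypos=0 \
    ! x264enc ! h264parse ! matroskamux ! filesink location=out.mkv \
  v4l2src device=/dev/video0 ! videoconvert ! videoscale \
    ! video/x-raw,width=320,height=240 ! mix.sink_0 \
  v4l2src device=/dev/video1 ! videoconvert ! videoscale \
    ! video/x-raw,width=320,height=240 ! mix.sink_1
```

This stays entirely in userspace, so it does not depend on the m2m driver
question above, but it also cannot exploit the IMX blitter hardware.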