On Sat, Feb 16, 2019 at 1:14 PM Nicolas Dufresne <nicolas@xxxxxxxxxxxx> wrote:
>
> On Sat, Feb 16, 2019 at 13:40, Hans Verkuil <hverkuil@xxxxxxxxx> wrote:
> >
> > On 2/16/19 4:42 PM, Nicolas Dufresne wrote:
> > > On Sat, Feb 16, 2019 at 04:48, Hans Verkuil <hverkuil@xxxxxxxxx> wrote:
> > >>
> > >> On 2/16/19 10:42 AM, Hans Verkuil wrote:
> > >>> On 2/16/19 1:16 AM, Tim Harvey wrote:
> > >>>> Greetings,
> > >>>>
> > >>>> What is needed to be able to take advantage of hardware video
> > >>>> composing capabilities and make them available in something like
> > >>>> GStreamer?
> > >>>
> > >>> Are you talking about what is needed in a driver or what is needed in
> > >>> gstreamer? Or both?
> > >>>
> > >>> In any case, the driver needs to support the V4L2 selection API,
> > >>> specifically the compose target rectangle for the video capture.
> > >>
> > >> I forgot to mention that the driver should allow the compose rectangle
> > >> to be anywhere within the bounding rectangle as set by S_FMT(CAPTURE).
> > >>
> > >> In addition, this also means that the DMA has to be able to do
> > >> scatter-gather, which I believe is not the case for the imx m2m
> > >> hardware.
> > >
> > > I believe the 2D blitter can take an arbitrary source rectangle and
> > > compose it to an arbitrary destination rectangle (a lot of these
> > > blitters in fact use Q16 coordinates, allowing for subpixel rectangles,
> > > something that V4L2 does not support).
> >
> > Not entirely true. I think this can be done through the selection API,
> > although it would require some updating of the spec and perhaps the
> > introduction of a field or flag. The original VIDIOC_CROPCAP and
> > VIDIOC_CROP ioctls actually could do this, since with analog video
> > (e.g. S-Video) you did not really have the concept of a 'pixel'. It's
> > an analog waveform, after all.
> > In principle the selection API works in the same way, even though the
> > documentation always assumes that the selection rectangles map directly
> > onto the digitized pixels. I'm not sure if there are still drivers that
> > report different crop bounds in CROPCAP compared to the actual number of
> > digitized pixels. The bttv driver is the most likely to do that, but I
> > haven't checked.
> >
> > Doing so made it very hard to understand, though.
> >
> > > I don't think this driver exists in any form upstream on the i.MX
> > > side. The Rockchip dev tried to get one in recently, but the
> > > discussion didn't go so well; the rejection of the proposed
> > > Porter-Duff controls was probably demotivating, as picking the right
> > > blending algorithm is the basis of such a driver.
> >
> > I tried to find the reason why the Porter-Duff control was dropped in v8
> > of the rockchip RGA patch series back in 2017.
> >
> > I can't find any discussion about it on the mailing list, so perhaps it
> > was discussed on IRC.
> >
> > Do you remember why it was removed?
>
> I'll try and retrace what happened. It was not a nack, and I realize that
> "rejection" wasn't the right word, but if I remember correctly, the focus
> of the review went entirely to this and to the fact that it was doing
> blending with such an API, while the original intention of the driver was
> to do CSC, so removing it was basically a way forward.
>
> > > I believe a better approach to upstreaming such a driver would be to
> > > write an M2M spec specific to that type of m2m driver. That spec
> > > would cover scalers and rotators, since unlike the IPUv3 (which I
> > > believe you are referring to) a lot of the CSC and scaler blocks are
> > > blitters.
> >
> > No, I was referring to the imx m2m driver that Philipp is working on.
>
> I'll need to check which driver Veo-Labs was using, but if it's the same,
> then maybe it only does source-over operations using SELECTION as you
> described.
> If I remember their use case, they were doing simple source-over blending
> of two video feeds.
>
> Could it be this?
> https://gitlab.com/veo-labs/linux/tree/veobox/drivers/staging/media/imx6/m2m
> Is it an ancestor of Philipp's driver?

It does look like this was an ancestor of Philipp's mem2mem driver, which is
currently under review
(https://patchwork.kernel.org/project/linux-media/list/?series=67977).

Hans, can you give me a little more detail about what would be needed in
Philipp's mem2mem driver to do this (or tell me if it's already there)? I
imagine what we are talking about is being able to specify the destination
buffer and a rectangle within it.

I will take a look at the gstreamer plugin at
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/308 and see
if I can get it building on top of master. It sounds like that's a good path
towards hardware-accelerated composing.

Thanks!

Tim