Re: [PATCH 0/4] Enable Hantro G1 post-processor

Hello,

On 9/12/19 8:35 AM, Ezequiel Garcia wrote:
> On Thu, 2019-09-12 at 14:52 +0900, Tomasz Figa wrote:
>> On Thu, Sep 12, 2019 at 4:49 AM Nicolas Dufresne <nicolas@xxxxxxxxxxxx> wrote:
>>> Le mercredi 11 septembre 2019 à 09:27 +0100, Ezequiel Garcia a écrit :
>>>> On Mon, 2019-09-09 at 16:07 +0900, Tomasz Figa wrote:
>>>>> Hi Ezequiel,
>>>>>
>>>>> On Wed, Sep 4, 2019 at 3:17 AM Ezequiel Garcia <ezequiel@xxxxxxxxxxxxx> wrote:
>>>>>> Hi all,
>>>>>>
>>>>>> This series enables the post-processor support available
>>>>>> on the Hantro G1 VPU. The post-processor block can be
>>>>>> pipelined with the decoder hardware, allowing it to perform
>>>>>> operations such as color conversion, scaling, rotation,
>>>>>> cropping, among others.
>>>>>>
>>>>>> The decoder hardware needs its own set of NV12 buffers
>>>>>> (the native decoder format), and the post-processor is the
>>>>>> owner of the CAPTURE buffers. This allows the application
>>>>>> to get processed (scaled, converted, etc.) buffers, completely
>>>>>> transparently.
>>>>>>
>>>>>> This feature is implemented by exposing other CAPTURE pixel
>>>>>> formats to the application (ENUM_FMT). When the application
>>>>>> sets a pixel format other than NV12, the driver will enable
>>>>>> and use the post-processor transparently.
>>>>>
>>>>> I'll try to review the series a bit later, but a general comment here
>>>>> is that the userspace wouldn't have a way to distinguish between the
>>>>> native and post-processed formats. I'm pretty much sure that
>>>>> post-processing at least imposes some power penalty, so it would be
>>>>> good if the userspace could avoid it if unnecessary.
>>>>>
>>>>
>>>> Hm, that's true, good catch.
>>>>
>>>> So, it would be desirable to retain the current behavior of allowing
>>>> the application to just set a different pixel format and get
>>>> a post-processed frame, transparently.
>>>>
>>>> But at the same time, it would be nice if the application were somehow
>>>> aware of the post-processing happening. Maybe we can expose a more
>>>> accurate media controller topology and have applications enable
>>>> the post-processing pipeline explicitly.
>>>
>>> How it works on the stateful side is that userspace sets the encoding
>>> type (the codec), then passes a header (in our case, there will be
>>> parsed structures replacing this) first. The driver then configures the
>>> capture format, giving a hint of the "default" or "native" format. This
>>> may or may not be sufficient, but it does work in giving userspace a
>>> hint.
>>
>> The bad side of that is that we can't handle more than 1 native format.
>>
>> For the most backwards-compatible behavior, sorting the results of
>> ENUM_FMT by format preference would let applications that simply take
>> the first returned format that works for them end up with the best
>> one.
>>
>> For a further improvement, an ENUM_FMT flag that tells the userspace
>> that a format is preferred could work.
>>
>> That said, modelling the pipeline appropriately using the media
>> controller is the idea I like the most, because it's the most
>> comprehensive solution. That would have to be well specified and
>> documented, though, and sounds like a long term effort.
>>
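
Just to illustrate the transparent path being discussed: from userspace it
boils down to walking ENUM_FMT on the CAPTURE queue and setting a format, so
a rough sketch (multiplanar API assumed, error handling trimmed) of the
"first format that works" behaviour would be:

/*
 * Rough sketch only: walk ENUM_FMT on the CAPTURE queue and take the
 * first pixel format that S_FMT accepts.  Assumes the multiplanar API
 * and that the driver lists its native format first, as suggested.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int pick_capture_format(int fd, struct v4l2_format *fmt)
{
	struct v4l2_fmtdesc desc = {
		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
	};

	for (desc.index = 0;
	     ioctl(fd, VIDIOC_ENUM_FMT, &desc) == 0; desc.index++) {
		memset(fmt, 0, sizeof(*fmt));
		fmt->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
		fmt->fmt.pix_mp.pixelformat = desc.pixelformat;

		/* Setting anything other than the native format is what
		 * transparently enables the post-processor. */
		if (ioctl(fd, VIDIOC_S_FMT, fmt) == 0)
			return 0;
	}

	return -1;
}

Picking anything other than the native entry (e.g. an RGB fourcc) is then
what silently turns the post-processor on, which is why the sort order (or a
"preferred" flag) matters.

Regarding the media controller idea: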

I was playing with vimc to emulate this topology.

What I would suggest is a topology like this:

Device topology
- entity 1: Decoder (2 pads, 3 links)
            type V4L2 subdev subtype Unknown flags 0
        pad0: Sink
                <- "M2M Video Node":1 [ENABLED]
        pad1: Source
                -> "M2M Video Node":0 [ENABLED]
                -> "PostProc":0 []

- entity 4: PostProc (2 pads, 2 links)
            type V4L2 subdev subtype Unknown flags 0
        pad0: Sink
                <- "Decoder":1 []
        pad1: Source
                -> "M2M Video Node":0 []

- entity 7: M2M Video Node (2 pads, 3 links)
            type Node subtype V4L flags 0
            device node name /dev/video0
        pad0: Sink
                <- "Decoder":1 [ENABLED]
                <- "PostProc":1 []
        pad1: Source
                -> "Decoder":0 [ENABLED]

Dot file:
---------
digraph board {
	rankdir=TB
	n00000001 [label="{{<port0> 0} | Decoder\n | {<port1> 1}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000001:port1 -> n00000007
	n00000001:port1 -> n00000004:port0 [style=dashed]
	n00000004 [label="{{<port0> 0} | PostProc\n | {<port1> 1}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000004:port1 -> n00000007 [style=dashed]
	n00000007 [label="M2M Video Node\n/dev/video0", shape=box, style=filled, fillcolor=yellow]
	n00000007 -> n00000001:port0
}
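
With a topology like that, the application would opt in to post-processing
explicitly by enabling the "Decoder":1 -> "PostProc":0 link (and the
corresponding "PostProc":1 -> video node link). A rough sketch of the
userspace side, with the entity/pad numbers hard-coded from the example
above (a real application would discover them via MEDIA_IOC_ENUM_ENTITIES /
MEDIA_IOC_ENUM_LINKS):

/*
 * Sketch: enable the Decoder:1 -> PostProc:0 link.  Entity IDs
 * (1 = Decoder, 4 = PostProc) are taken from the example topology
 * above purely for illustration.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/media.h>

static int enable_postproc_link(const char *mdev)
{
	struct media_link_desc link;
	int fd, ret;

	fd = open(mdev, O_RDWR);
	if (fd < 0)
		return -1;

	memset(&link, 0, sizeof(link));
	link.source.entity = 1;		/* Decoder */
	link.source.index  = 1;		/* pad1: Source */
	link.sink.entity   = 4;		/* PostProc */
	link.sink.index    = 0;		/* pad0: Sink */
	link.flags = MEDIA_LNK_FL_ENABLED;

	ret = ioctl(fd, MEDIA_IOC_SETUP_LINK, &link);
	close(fd);
	return ret;
}

(or simply media-ctl -l '"Decoder":1 -> "PostProc":0 [1]' from the command
line).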


Also, as you mentioned in patch 4:

> On the YUV-to-RGB path, the post-processing pipeline
> allows modifying the image contrast, brightness and saturation,
> so additional user controls are added to expose them.

So I think it would make sense to expose these controls on the post-processor subdev,
through a /dev/v4l-subdev0 node; this avoids having a dynamic set of controls on the video
node depending on the path.

You don't even need to implement VIDIOC_SUBDEV_{G,S}_FMT on the pads (at least
v4l2-compliance doesn't complain; it is happy with "Not Supported"). So the post-processor
/dev/v4l-subdev0 devnode could be used to expose only these controls, and you don't need to
expose a devnode for the Decoder entity above (unless you have another reason to).
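
On the driver side that would just mean registering the control handler on the
post-processor subdev rather than on the video node; a minimal sketch (the
hantro_postproc_* names, ranges and defaults below are made up for
illustration) could look like:

/*
 * Sketch: hang contrast/brightness/saturation off the post-processor
 * subdev so they only show up on its /dev/v4l-subdevN node.
 */
#include <media/v4l2-ctrls.h>
#include <media/v4l2-subdev.h>

struct hantro_postproc {
	struct v4l2_subdev sd;
	struct v4l2_ctrl_handler ctrls;
};

static int hantro_postproc_init_ctrls(struct hantro_postproc *pp,
				      const struct v4l2_ctrl_ops *ops)
{
	struct v4l2_ctrl_handler *hdl = &pp->ctrls;

	v4l2_ctrl_handler_init(hdl, 3);
	/* Ranges/defaults are placeholders, not the real G1 limits. */
	v4l2_ctrl_new_std(hdl, ops, V4L2_CID_CONTRAST,   0, 255, 1, 128);
	v4l2_ctrl_new_std(hdl, ops, V4L2_CID_BRIGHTNESS, 0, 255, 1, 128);
	v4l2_ctrl_new_std(hdl, ops, V4L2_CID_SATURATION, 0, 255, 1, 128);
	if (hdl->error)
		return hdl->error;

	/* Controls live on the subdev, not on the m2m video device. */
	pp->sd.ctrl_handler = hdl;
	pp->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
	return 0;
}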

I hope this helps,
Helen


