Re: Proposal: A third buffer type for the reconstruction buffers in V4L2 M2M encoder

Hi,

On Tuesday, 28 June 2022 at 00:12 +0800, ayaka wrote:
> Hi All
> 
> I think we need a separate buffer queue to manage the reconstruction or
> auxiliary buffers used by V4L2 M2M encoder drivers.
> 
> Some drivers already allocate internal buffers as the reconstruction
> buffers for their encoding instances. The drivers/media/platform/chips-media
> driver is one example: its coda_alloc_context_buf() allocates the maximum
> allowed number of reference buffers for an instance as the reconstruction
> buffers. You can't control the lifetime of a reconstruction buffer there,
> which means you can't control which buffers are used as references.
> 
> That may be acceptable for a hardware encoder with a control chip that does
> its own bitrate control. For stateless encoders, which are driven entirely
> by the user, it would be better to let the user decide the lifetime of a
> reconstruction buffer.
> 
> In the SVC case, a layer may refer to a buffer from another layer that was
> encoded many frames ago.

I would love to see a proposal for SVC support; that would greatly help to
understand where external reconstructed-frame buffer management fits in. Just
"controlling lifetime" is too weak a justification for the added complexity.

> 
> I am not sure which way is better; I would implement one based on the
> feedback. One is reusing V4L2_BUF_TYPE_VIDEO_OVERLAY, which would support
> REQBUFS,

I don't think a re-purpose is a good idea.

> S_FMT, G_FMT, QBUF, and DQBUF in addition to the existing m2m operations.
> Another idea is extending those ioctls to the media node that the stateless
> m2m driver already uses for allocating the request_fd token.

CODA goes further than this: it hides an internal pixel format which has no use
outside of the chip. We'd have to introduce more vendor formats in order to
allow S_FMT and friends. Having to queue reference buffers also requires
in-depth knowledge of the decoding process, which is a misfit for a stateful
decoder, I think.
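
Just so we compare the same thing, here is roughly what your first idea would
mean for userspace. This is only a sketch of mine:
V4L2_BUF_TYPE_VIDEO_RECONSTRUCTION does not exist anywhere, it is just a
placeholder for whatever new (or, in your idea, re-purposed OVERLAY) buffer
type would carry the reconstruction buffers; the rest is plain existing V4L2.

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Placeholder only: no such buffer type exists in mainline. */
#define V4L2_BUF_TYPE_VIDEO_RECONSTRUCTION 0x100

static int setup_recon_queue(int video_fd, unsigned int count)
{
	struct v4l2_format fmt = {
		.type = V4L2_BUF_TYPE_VIDEO_RECONSTRUCTION,
	};
	struct v4l2_requestbuffers reqbufs = {
		.type = V4L2_BUF_TYPE_VIDEO_RECONSTRUCTION,
		.memory = V4L2_MEMORY_MMAP,
		.count = count,
	};

	/* The reconstruction format may differ from the OUTPUT format,
	 * e.g. NV12_4L4 on Hantro H1, a vendor FBC format elsewhere. */
	fmt.fmt.pix.width = 1920;
	fmt.fmt.pix.height = 1080;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_NV12_4L4;
	if (ioctl(video_fd, VIDIOC_S_FMT, &fmt))
		return -1;

	return ioctl(video_fd, VIDIOC_REQBUFS, &reqbufs);
}

/* Per frame: hand the encoder the reconstruction buffer it should write
 * into; re-queueing (or withholding) a buffer later is how the user would
 * pin it as a long-term reference. */
static int queue_recon_buffer(int video_fd, unsigned int index)
{
	struct v4l2_buffer buf = {
		.type = V4L2_BUF_TYPE_VIDEO_RECONSTRUCTION,
		.memory = V4L2_MEMORY_MMAP,
		.index = index,
	};

	return ioctl(video_fd, VIDIOC_QBUF, &buf);
}

Written out like this, the vendor-format problem becomes obvious: for the FBC
case there is nothing sensible userspace could put in pixelformat.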

> 
> Please note that the reconstruction buffers could use a different pixel
> format than the input frames. For example, Hantro H1 could use NV12_4L4 for
> its reconstruction buffers, and later generations of chips-media codecs
> could use an FBC format.
> Also, some decoders have an online post-processor, which means they can't
> do pixel format conversion independently. The drivers for those devices may
> need this as well.

Even for decoders, when there is an inline post-processor, an extra set of
buffers is allocated internally.
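
If these buffers ever get exposed, I would expect the format to be dictated by
the hardware anyway, so userspace would merely query it. Another sketch of
mine, same hypothetical buffer type as above:

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Same placeholder as in the previous sketch, nothing like it exists. */
#define V4L2_BUF_TYPE_VIDEO_RECONSTRUCTION 0x100

/* Let the driver report whatever format it writes its reconstructions in
 * (NV12_4L4 on Hantro H1, a vendor FBC format on newer chips-media
 * codecs, ...), rather than pretending userspace can choose it. */
static int get_recon_format(int video_fd, unsigned int *pixelformat)
{
	struct v4l2_format fmt = {
		.type = V4L2_BUF_TYPE_VIDEO_RECONSTRUCTION,
	};

	if (ioctl(video_fd, VIDIOC_G_FMT, &fmt))
		return -1;

	*pixelformat = fmt.fmt.pix.pixelformat;
	return 0;
}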

I'm not sure what I could propose on top of this, since there is very little
skeleton in this proposal. It is more of a feature request, so stepping back a
little, perhaps we should start with the real-life use cases that need this,
and from there we can think of a flow?

> 
> Sincerely
> Randy




