Re: Hantro H1 Encoding Upstreaming

Hi,

On Wed 15 Jan 25, 14:43, Nicolas Dufresne wrote:
> On Wednesday 15 January 2025 at 16:03 +0100, Paul Kocialkowski wrote:
> > One last word about private driver buffers (such as motion vector and
> > reconstruction buffers): I think they should remain private and hidden from
> > userspace. We could add something extra to the uAPI later if there is really
> > a need to access them.
> 
> I don't know if you noticed, but Jacopo started a proposal around a
> multi-context media controller. For this type of extension, my long-term idea
> was that we could adopt this and introduce new nodes to expose specialized
> memory. These nodes would be unlinked by default, meaning the default
> behaviour with a single m2m video node would remain.
> 
> An existing use case for that would be in the decoder space: VC8000D and up
> have 4 post-processed outputs, which means up to 5 outputs if you count the
> reference frames. So we could set it up:

Sounds very interesting for handling multi-core codecs and devices with a
separate post-processing output (IIRC the Allwinner video decoder can have an
extra thumbnail output, which can be very handy for JPEG stuff).
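
Just to make sure I picture this correctly: these extra capture nodes would
come with disabled links that userspace enables on demand through the existing
media controller uAPI, something like this sketch (entity and pad numbers are
made up):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Enable the disabled-by-default link from a decoder's post-processor
 * source pad to one of the extra capture video nodes. */
int enable_pp_capture(int media_fd)
{
	struct media_link_desc link;

	memset(&link, 0, sizeof(link));
	link.source.entity = 1;		/* made up: decoder entity id */
	link.source.index = 2;		/* made up: post-processor source pad */
	link.sink.entity = 5;		/* made up: extra capture video node */
	link.sink.index = 0;
	link.flags = MEDIA_LNK_FL_ENABLED;

	return ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);
}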

> Easier said than done, but I think this can work. I suspect it is quite
> feasible to keep the stream state separated, allowing the chosen output
> resolution to be reconfigured without having to reset the decoder state
> (which is only bound to reference frames). It also solves a few issues we
> have with regard to memory over-allocation when we hide the reference frames.
> 
> For encoders, the reconstruction frames would also be capture nodes. I'm not
> completely versed in what they can be used for; also, their pixel format
> would have to be known for them to be useful, of course.

Makes a lot of sense. Honestly, this is starting to look like the ISP
situation, where we have multiple video nodes dedicated to specific things and
various specific buffer formats for them. This brings a lot of flexibility and
many possibilities for decoders/encoders.

In contrast, the ISP API uses a separate video device for metadata and
configuration submission, which we do through the request API and controls in
the decoder/encoder cases. But we could imagine adding extra source video
nodes to provide e.g. random bitstream units to stuff in when encoding, and
just making sure they are submitted with the same request. I guess that should
work, since the request is a media-wide object and not specific to a video
node (see the sketch below).
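
To make that concrete, here is a rough sketch of what I mean, with made-up fds
and assuming the queues were already negotiated: the request is allocated on
the media device and then referenced when queueing buffers on both output
nodes:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>
#include <linux/videodev2.h>

int queue_with_request(int media_fd, int raw_fd, int units_fd)
{
	struct v4l2_buffer buf;
	int req_fd;

	/* Requests are allocated from the media device, not a video node. */
	if (ioctl(media_fd, MEDIA_IOC_REQUEST_ALLOC, &req_fd) < 0)
		return -1;

	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = 0;
	buf.flags = V4L2_BUF_FLAG_REQUEST_FD;
	buf.request_fd = req_fd;

	/* Raw frame on the encoder's usual output node. */
	if (ioctl(raw_fd, VIDIOC_QBUF, &buf) < 0)
		return -1;

	/* Extra bitstream units on the (hypothetical) second output node,
	 * attached to the very same request. */
	if (ioctl(units_fd, VIDIOC_QBUF, &buf) < 0)
		return -1;

	/* Everything in the request gets processed together when it runs. */
	return ioctl(req_fd, MEDIA_REQUEST_IOC_QUEUE);
}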

Anyway, like you say, easier said than done, but it seems like a reasonable
design extension that would solve a lot of current API limitations.

Cheers,

Paul

-- 
Paul Kocialkowski,

Independent contractor - sys-base - https://www.sys-base.io/
Free software developer - https://www.paulk.fr/

Expert in multimedia, graphics and embedded hardware support with Linux.
