Re: [PATCH 10/10] venus: dec: make decoder compliant with stateful codec API

On Thu, Feb 7, 2019 at 4:33 PM Tomasz Figa <tfiga@xxxxxxxxxxxx> wrote:
>
> On Tue, Feb 5, 2019 at 7:35 PM Hans Verkuil <hverkuil@xxxxxxxxx> wrote:
> >
> > On 2/5/19 10:31 AM, Tomasz Figa wrote:
> > > On Tue, Feb 5, 2019 at 6:00 PM Hans Verkuil <hverkuil@xxxxxxxxx> wrote:
> > >>
> > >> On 2/5/19 7:26 AM, Tomasz Figa wrote:
> > >>> On Fri, Feb 1, 2019 at 12:18 AM Nicolas Dufresne <nicolas@xxxxxxxxxxxx> wrote:
> > >>>>
> > >>>>> On Thursday, January 31, 2019 at 22:34 +0900, Tomasz Figa wrote:
> > >>>>> On Thu, Jan 31, 2019 at 9:42 PM Philipp Zabel <p.zabel@xxxxxxxxxxxxxx> wrote:
> > >>>>>> Hi Nicolas,
> > >>>>>>
> > >>>>>> On Wed, 2019-01-30 at 10:32 -0500, Nicolas Dufresne wrote:
> > >>>>>>> On Wednesday, January 30, 2019 at 15:17 +0900, Tomasz Figa wrote:
> > >>>>>>>>> I don't remember saying that; maybe I meant to say there might be a
> > >>>>>>>>> workaround?
> > >>>>>>>>>
> > >>>>>>>>> For the record, here is where we queue the headers (or first frame):
> > >>>>>>>>>
> > >>>>>>>>> https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/blob/master/sys/v4l2/gstv4l2videodec.c#L624
> > >>>>>>>>>
> > >>>>>>>>> Then, a few lines below, this helper does G_FMT internally:
> > >>>>>>>>>
> > >>>>>>>>> https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/blob/master/sys/v4l2/gstv4l2videodec.c#L634
> > >>>>>>>>> https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/blob/master/sys/v4l2/gstv4l2object.c#L3907
> > >>>>>>>>>
> > >>>>>>>>> And it just plainly fails if G_FMT returns an error of any type. This
> > >>>>>>>>> was how Kamil designed it initially for the MFC driver. There was no
> > >>>>>>>>> alternative back then (no EAGAIN yet either).
> > >>>>>>>>
> > >>>>>>>> Hmm, was that ffmpeg then?
> > >>>>>>>>
> > >>>>>>>> So would it just set the OUTPUT width and height to 0? Does it mean
> > >>>>>>>> that gstreamer doesn't work with coda and mtk-vcodec, which don't have
> > >>>>>>>> such a wait in their g_fmt implementations?
> > >>>>>>>
> > >>>>>>> I don't know about MTK; I don't have the hardware and didn't integrate
> > >>>>>>> their vendor pixel format. For CODA, I know it works; if there is no
> > >>>>>>> wait in the G_FMT, then I suppose we are just really lucky with the
> > >>>>>>> timing (it would mean the driver processes the SPS/PPS synchronously,
> > >>>>>>> and a simple lock in the G_FMT call is enough to wait). Adding Philipp
> > >>>>>>> in CC; he could explain how this works. I know they use GStreamer in
> > >>>>>>> production, and he would have fixed GStreamer already if this were
> > >>>>>>> causing an important issue.
> > >>>>>>
> > >>>>>> CODA predates the width/height=0 rule on the coded/OUTPUT queue.
> > >>>>>> It currently behaves more like a traditional mem2mem device.
> > >>>>>
> > >>>>> The rule in the latest spec is that if width/height is 0, then the
> > >>>>> CAPTURE format is determined only after the stream is parsed.
> > >>>>> Otherwise it's instantly deduced from the OUTPUT resolution.
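
For reference, here is a minimal sketch of how an application would
drive that parse-based path, as far as I understand the spec (decoder
node already open as fd, helper name made up, error handling omitted):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Parse-based path: leave OUTPUT width/height at 0 so the CAPTURE
 * format is determined only after the driver has parsed the stream;
 * the resolution then arrives via a SOURCE_CHANGE event. */
static int start_parse_based(int fd)
{
        struct v4l2_format fmt;
        struct v4l2_event_subscription sub;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
        /* width/height deliberately left at 0 */
        if (ioctl(fd, VIDIOC_S_FMT, &fmt))
                return -1;

        memset(&sub, 0, sizeof(sub));
        sub.type = V4L2_EVENT_SOURCE_CHANGE;
        return ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub);
}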
> > >>>>>
> > >>>>>> When width/height is set via S_FMT(OUT) or output crop selection, the
> > >>>>>> driver will believe it and set the same (rounded up to macroblock
> > >>>>>> alignment) on the capture queue without ever having seen the SPS.
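
(For illustration, with 16x16 macroblocks that rounding amounts to
something like the following; just a sketch, not coda's actual code:)

/* Round a dimension up to whole 16x16 macroblocks, e.g. when
 * deriving the CAPTURE size from S_FMT(OUT) alone. */
#define MB_ALIGN(x)     (((x) + 15U) & ~15U)
/* e.g. a 1920x1080 display size becomes a 1920x1088 coded size */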
> > >>>>>
> > >>>>> That's why I asked whether gstreamer sets width and height of OUTPUT
> > >>>>> to non-zero values. If so, there is no regression, as the spec mimics
> > >>>>> the coda behavior.
> > >>>>
> > >>>> I see; Philipp's answer explains why it works. Note that GStreamer
> > >>>> sets the display size on the OUTPUT format (in fact we pass as much
> > >>>> information as we have, because a) it's generic code and b) it will
> > >>>> be needed someday when we enable pre-allocation (REQBUFS before the
> > >>>> SPS/PPS is passed, to avoid the setup delay introduced by allocation,
> > >>>> mostly seen with CMA-based decoders). In any case, the driver-reported
> > >>>> display size should always be ignored in GStreamer; the only
> > >>>> information we look at is G_SELECTION, for the case where the x/y of
> > >>>> the cropping rectangle is non-zero.
> > >>>>
> > >>>> Note this can only work if the capture queue is not affected by the
> > >>>> coded size, or if the round-up made by the driver is bigger than or
> > >>>> equal to that coded size. I believe CODA falls into the first
> > >>>> category, since the decoding happens in a separate set of buffers and
> > >>>> the frames are then de-tiled into the capture buffers (if I
> > >>>> understood correctly).
> > >>>
> > >>> Sounds like it would work only if the coded size is equal to the visible
> > >>> size (that GStreamer sets) rounded up to full macroblocks. Non-zero x
> > >>> or y in the crop could be problematic too.
> > >>>
> > >>> Hans, what's your view on this? Should we require G_FMT(CAPTURE) to
> > >>> wait until a format becomes available or the OUTPUT queue runs out of
> > >>
> > >> You mean CAPTURE queue? If not, then I don't understand that part.
> > >
> > > No, I meant exactly the OUTPUT queue. The behavior of s5p-mfc, in case
> > > the format has not been detected yet, is to wait for any pending
> > > bitstream buffers to be processed by the decoder before returning an
> > > error.
> > >
> > > See https://elixir.bootlin.com/linux/v5.0-rc5/source/drivers/media/platform/s5p-mfc/s5p_mfc_dec.c#L329
> >
> > It blocks?! That shouldn't happen. Totally against the spec.
> >
>
> Yeah, and that's what this patch tries to implement in venus as well;
> it is seemingly required for compatibility with gstreamer...
>

Hans, Nicolas, any thoughts?

> > > .
> > >
> > >>
> > >>> buffers?
> > >>
> > >> First see my comment here regarding G_FMT returning an error:
> > >>
> > >> https://www.spinics.net/lists/linux-media/msg146505.html
> > >>
> > >> In my view that is a bad idea.
> > >
> > > I don't like it either. It seemed to be the most consistent and
> > > compatible behavior, but I'm not sure anymore.
> > >
> > >>
> > >> What G_FMT should return between the time a resolution change was
> > >> detected and the CAPTURE queue being drained (i.e. the old or the new
> > >> resolution?) is something I am not sure about.
> > >
> > > Note that we're talking here about the initial stream information
> > > detection, when the driver doesn't have any information needed to
> > > determine the CAPTURE format yet.
> >
> > IMHO the driver should just start off with some default format; it
> > really doesn't matter what that is.
> >
>
> I guess that's fine indeed.
>
> > This initial situation is really just a Seek operation: you have a format,
> > you seek to a new position and when you find the resolution of the
> > first frame in the bitstream it triggers a SOURCE_CHANGE event. Actually,
> > to be really consistent with the Seek: you only need to trigger this event
> > if 1) the new resolution is different from the current format, or 2) the
> > capture queue is empty. 2) will never happen during a normal Seek, so
> > that's a little bit special to this initial situation.
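
For illustration, the application side of that event would look
roughly like this (the helper name is made up; this is a sketch, not
a reference implementation):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Drain pending events after poll() signals an exception on the fd
 * and report whether a resolution change was among them. */
static int resolution_changed(int fd)
{
        struct v4l2_event ev;

        while (ioctl(fd, VIDIOC_DQEVENT, &ev) == 0) {
                if (ev.type == V4L2_EVENT_SOURCE_CHANGE &&
                    (ev.u.src_change.changes & V4L2_EVENT_SRC_CH_RESOLUTION))
                        return 1;
        }
        return 0;
}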
>
> Having the error returned allowed applications to handle the initial
> parsing without the event, though. An application could have waited
> for all the OUTPUT buffers to be dequeued and then called G_FMT to
> check whether that was enough data to obtain the format.

Any thoughts on this one too?

>
> >
> > The only open question is what should be done with any CAPTURE
> > buffers that the application may have queued. Return one buffer with
> > bytesused set to 0 and the LAST flag set? I think that would be
> > consistent with the specification, and I think this situation can
> > happen during a regular seek operation as well.
>
> Yes, that would be reasonably consistent.
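
For illustration, an application would detect such a drain marker
along these lines (a sketch assuming a single-plane CAPTURE format
with MMAP buffers; the helper name is made up):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Dequeue one CAPTURE buffer and check for the drain marker:
 * an empty buffer (bytesused == 0) with V4L2_BUF_FLAG_LAST set. */
static int dequeued_last_buffer(int fd)
{
        struct v4l2_plane planes[1];
        struct v4l2_buffer buf;

        memset(&buf, 0, sizeof(buf));
        memset(planes, 0, sizeof(planes));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.m.planes = planes;
        buf.length = 1;
        if (ioctl(fd, VIDIOC_DQBUF, &buf))
                return -1;
        return (buf.flags & V4L2_BUF_FLAG_LAST) &&
               buf.m.planes[0].bytesused == 0;
}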
>
> >
> > >
> > >>
> > >> On the one hand it is desirable to have the new resolution ASAP; on
> > >> the other hand, returning the new resolution would mean that the
> > >> returned format is inconsistent with the capture buffer sizes.
> > >>
> > >> I'm leaning towards either returning the new resolution.
> > >
> > > Is the "or ..." part of the sentence missing?
> >
> > Sorry, 'either' should be dropped from that sentence.
> >
>
> Got it, thanks for clarification.
>
> > >
> > > One of the major concerns was that we needed to completely stall the
> > > pipeline in case of a resolution change, which made it hard to deliver
> > > a seamless transition to users. An idea that comes to mind would be to
> > > extend the source change event to actually include the v4l2_format
> > > struct describing the new format. Then the CAPTURE queue could keep
> > > the old format until it is drained, which should work fine for
> > > existing applications, while new ones could use the new event data to
> > > determine whether the buffers need to be reallocated.
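
Purely to illustrate the idea (this is NOT part of the current uAPI,
and a full v4l2_format would not even fit in the 64-byte event
payload as-is, so take it only as a sketch of the intent):

#include <linux/videodev2.h>

/* HYPOTHETICAL extension: a source change event that carries the
 * new format, so applications can decide up front whether the
 * existing buffers can be reused. */
struct v4l2_event_src_change_ex {
        __u32                   changes;        /* as today */
        struct v4l2_format      format;         /* new CAPTURE format */
};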
> >
> > In my opinion G_FMT should return the new resolution after the
> > SOURCE_CHANGE event was sent. So you know the new resolution at that
> > point even though there may still be capture buffers pending with the
> > old resolution.
>
> Agreed. That's how it's defined in the current version.
>
> >
> > What would be nice to have, though, is that the resolution could be part
> > of v4l2_buffer. So as long as the new resolution would still fit inside
> > the allocated buffers, you could just continue decoding without having
> > to send a SOURCE_CHANGE event.
>
> Hmm, I'm not sure I follow. What we effectively care about when
> deciding whether we can continue decoding is whether the new sizeimage
> is less than or equal to the current buffer size. However, we still
> need to tell the application that the geometry changed. Otherwise it
> would have to call G_FMT and G_SELECTION every frame and compare with
> the previous frame.
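
In other words, the reuse decision itself boils down to a single size
comparison, but the geometry still has to be signalled somehow; a
sketch (names made up, new_fmt coming e.g. from the hypothetical
extended event above):

#include <linux/videodev2.h>

/* The only thing gating continued decoding into the existing
 * CAPTURE buffers is whether the new frames still fit. */
static int can_reuse_capture_buffers(const struct v4l2_pix_format_mplane *new_fmt,
                                     __u32 allocated_sizeimage)
{
        return new_fmt->plane_fmt[0].sizeimage <= allocated_sizeimage;
}
/* The geometry change (width/height/crop) must still reach the
 * application, or it would have to call G_FMT and G_SELECTION for
 * every single frame. */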

Any thoughts on this one as well?

Best regards,
Tomasz



