Re: Proposed updates and guidelines for MPEG-2, H.264 and H.265 stateless support

On Tue, May 21, 2019 at 12:23:46PM -0400, Nicolas Dufresne wrote:
> Le mardi 21 mai 2019 à 17:43 +0200, Thierry Reding a écrit :
> > On Wed, May 15, 2019 at 07:42:50PM +0200, Paul Kocialkowski wrote:
> > > Hi,
> > > 
> > > Le mercredi 15 mai 2019 à 10:42 -0400, Nicolas Dufresne a écrit :
> > > > Le mercredi 15 mai 2019 à 12:09 +0200, Paul Kocialkowski a écrit :
> > > > > Hi,
> > > > > 
> > > > > With the Rockchip stateless VPU driver in the works, we now have a
> > > > > better idea of what the situation is like on platforms other than
> > > > > Allwinner. This email shares my conclusions about the situation and how
> > > > > we should update the MPEG-2, H.264 and H.265 controls accordingly.
> > > > > 
> > > > > - Per-slice decoding
> > > > > 
> > > > > We've discussed this one already[0] and Hans has submitted a patch[1]
> > > > > to implement the required core bits. When we agree it looks good, we
> > > > > should lift the restriction that all slices must be concatenated and
> > > > > have them submitted as individual requests.
> > > > > 
> > > > > One question is what to do about other controls. I feel like it would
> > > > > make sense to always pass all the required controls for decoding the
> > > > > slice, including the ones that don't change across slices. But there
> > > > > may be no particular advantage to this and only downsides. Not doing it
> > > > > and relying on the "control cache" can work, but we need to specify
> > > > > that only a single stream can be decoded per opened instance of the
> > > > > v4l2 device. This is the assumption we're going with for handling
> > > > > multi-slice anyway, so it shouldn't be an issue.
> > > > 
> > > > My opinion on this is that the m2m instance is a state, and the driver
> > > > should be responsible for doing time-division multiplexing across
> > > > multiple m2m instance jobs. Doing the time-division multiplexing in
> > > > userspace would require some sort of daemon to work properly across
> > > > processes. I also think the kernel is a better place for doing resource
> > > > access scheduling in general.
> > > 
> > > I agree with that, yes. We always have a single m2m context and
> > > specific controls per opened device, so keeping cached values works
> > > out well.
> > > 
> > > So maybe we should explicitly require that the request with the first
> > > slice for a frame also contains the per-frame controls.
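> > >
> > > In terms of the request API, that could look roughly like the sketch
> > > below (hedged: error handling is elided, and the set of control IDs
> > > passed in frame_ctrls is only illustrative):
> > >
> > >   #include <sys/ioctl.h>
> > >   #include <linux/media.h>
> > >   #include <linux/videodev2.h>
> > >
> > >   static void queue_first_slice(int video_fd, int media_fd,
> > >                                 struct v4l2_buffer *slice_buf,
> > >                                 struct v4l2_ext_control *frame_ctrls,
> > >                                 unsigned int nctrls)
> > >   {
> > >           int req_fd;
> > >
> > >           /* One request per slice, allocated from the media device. */
> > >           ioctl(media_fd, MEDIA_IOC_REQUEST_ALLOC, &req_fd);
> > >
> > >           /* The per-frame controls (SPS, PPS, decode params, ...)
> > >            * ride in the request carrying the frame's first slice. */
> > >           struct v4l2_ext_controls ext = {
> > >                   .which = V4L2_CTRL_WHICH_REQUEST_VAL,
> > >                   .request_fd = req_fd,
> > >                   .count = nctrls,
> > >                   .controls = frame_ctrls,
> > >           };
> > >           ioctl(video_fd, VIDIOC_S_EXT_CTRLS, &ext);
> > >
> > >           /* Queue the OUTPUT buffer holding the slice against the
> > >            * same request. */
> > >           slice_buf->flags |= V4L2_BUF_FLAG_REQUEST_FD;
> > >           slice_buf->request_fd = req_fd;
> > >           ioctl(video_fd, VIDIOC_QBUF, slice_buf);
> > >
> > >           /* Submit: the driver sees buffer and controls atomically. */
> > >           ioctl(req_fd, MEDIA_REQUEST_IOC_QUEUE);
> > >   }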
> > > 
> > > > > - Annex-B formats
> > > > > 
> > > > > I don't think we have really reached a conclusion on the pixel formats
> > > > > we want to expose. The main issue is how to deal with codecs that need
> > > > > the full slice NALU with start code, where the slice_header is
> > > > > duplicated in the raw bitstream, while others are fine with just the
> > > > > encoded slice data and the parsed slice header control.
> > > > > 
> > > > > My initial thinking was that we'd need 3 formats:
> > > > > - One that takes only the slice's compressed data (without the raw
> > > > > slice header and start code);
> > > > > - One that takes both the NALU data (including start code, raw header
> > > > > and compressed data) and the slice header controls;
> > > > > - One that takes the NALU data but no slice header control.
> > > > > 
> > > > > But I no longer think the latter really makes sense in the context of
> > > > > stateless video decoding.
> > > > > 
> > > > > A side-note: I think we should definitely have data offsets in every
> > > > > case, so that implementations can just push the whole NALU regardless
> > > > > of the format if they're lazy.
> > > > 
> > > > I realize that I didn't share our latest research on the subject. So a
> > > > slice in the original bitstream is formed of the following blocks
> > > > (simplified):
> > > > 
> > > >   [nal_header][nal_type][slice_header][slice]
> > > 
> > > Thanks for the details!
> > > 
> > > > nal_header:
> > > > This one is a header used to locate the start and the end of a NAL.
> > > > There are two standard forms. The first is the Annex B start code, a
> > > > sequence of 3 bytes: 0x00 0x00 0x01. You'll often see 4 bytes; the
> > > > extra leading byte is then a zero that pads the previous NAL, and
> > > > this is also a totally valid start code. The second form is the AVC
> > > > form, notably used in the ISOMP4 container: it is simply the size of
> > > > the NAL. In that case you must keep your buffer aligned to NALs, as
> > > > you cannot scan from a random location (see the sketch after these
> > > > definitions).
> > > > 
> > > > nal_type:
> > > > It's a bit more than just the type, but it contains at least the
> > > > information of the NAL type. It has a different size in H.264 and
> > > > HEVC, but its size is in bytes.
> > > > 
> > > > slice_header:
> > > > This contains per-slice parameters, like the modification lists to
> > > > apply to the references. Its size is in bits, not in bytes.
> > > > 
> > > > slice:
> > > > I don't really know what is in it exactly, but this is the data used
> > > > to decode. This part has a special coding called anti-emulation,
> > > > which prevents a start code from appearing in it. This coding is
> > > > present in both forms, Annex B and AVC (GStreamer and some reference
> > > > manuals call Annex B the byte-stream format).
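> > > >
> > > > To illustrate, scanning for the Annex B form is trivial, and the
> > > > anti-emulation above guarantees no false start code can appear
> > > > inside the slice data. A naive linear-scan sketch (a Boyer-Moore
> > > > variant would be faster, as noted further down):
> > > >
> > > >   #include <stdint.h>
> > > >   #include <stddef.h>
> > > >
> > > >   /* Return a pointer to the first byte after a 0x00 0x00 0x01
> > > >    * start code, or NULL. A leading 0x00 (the 4-byte form) is just
> > > >    * padding from the previous NAL and needs no special casing. */
> > > >   static const uint8_t *next_nal(const uint8_t *p, const uint8_t *end)
> > > >   {
> > > >           for (; p + 3 <= end; p++)
> > > >                   if (p[0] == 0x00 && p[1] == 0x00 && p[2] == 0x01)
> > > >                           return p + 3;
> > > >           return NULL;
> > > >   }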
> > > > 
> > > > So, what we notice is that what is currently passed through the
> > > > Cedrus driver is:
> > > >   [nal_type][slice_header][slice]
> > > >
> > > > This matches what is being passed through VA-API. We can understand
> > > > that stripping off the slice_header would be hard, since its size is
> > > > in bits. Instead, we pass size and header_bit_size in slice_params.
> > > 
> > > True, there is that.
> > > 
> > > > About Rockchip: the RK3288 is a Hantro G1 and has a bit called
> > > > start_code_e; when you turn this off, you don't need a start code.
> > > > As a side effect, the expected bitstream becomes identical to what
> > > > Cedrus takes. We now know that it works with the ffmpeg branch
> > > > implemented for Cedrus.
> > > 
> > > Oh great, that makes life easier in the short term, but I guess the
> > > issue could arise on another decoder sooner or later.
> > > 
> > > > Now, what's special about the Hantro G1 (also found on the i.MX8M)
> > > > is that it takes care of reading and executing the modification
> > > > lists found in the slice header. Mostly because I strongly disliked
> > > > having to pass the p/b0/b1 parameters, Boris implemented in the
> > > > driver the transformation from the DPB entries into the p/b0/b1
> > > > lists. These lists are standard: it's basically implementing 8.2.4.1
> > > > and 8.2.4.2 (the following section covers the execution of the
> > > > modification lists). As these lists are not modified per slice, they
> > > > only need to be calculated once per frame. As a result, we don't
> > > > need these new lists, and we can work with the same H264_SLICE
> > > > format as Cedrus is using.
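> > > >
> > > > For reference, the 8.2.4.2 initialisation for a P slice orders the
> > > > short-term references by descending PicNum, followed by the
> > > > long-term ones by ascending LongTermPicNum. A hedged sketch, with
> > > > illustrative types rather than the actual uAPI structs:
> > > >
> > > >   #include <stdbool.h>
> > > >   #include <stdint.h>
> > > >   #include <stdlib.h>
> > > >
> > > >   struct dpb_entry {
> > > >           bool     long_term;
> > > >           int32_t  pic_num;           /* short-term: PicNum */
> > > >           uint32_t long_term_pic_num; /* long-term only */
> > > >   };
> > > >
> > > >   static int cmp_p_ref(const void *pa, const void *pb)
> > > >   {
> > > >           const struct dpb_entry *a = pa, *b = pb;
> > > >
> > > >           if (a->long_term != b->long_term)
> > > >                   return a->long_term - b->long_term; /* short first */
> > > >           if (a->long_term) /* ascending LongTermPicNum */
> > > >                   return (int)a->long_term_pic_num -
> > > >                          (int)b->long_term_pic_num;
> > > >           return b->pic_num - a->pic_num; /* descending PicNum */
> > > >   }
> > > >
> > > >   /* qsort(refs, nrefs, sizeof(*refs), cmp_p_ref); the slice
> > > >    * header's ref_pic_list_modification (8.2.4.3) is then applied
> > > >    * on top of this initial list. */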
> > > 
> > > Yes, but I definitely think it makes more sense to pass the list
> > > modifications rather than reconstructing them in the driver from a
> > > full list. IMO, controls should stick as close to the bitstream as
> > > possible.
> > > 
> > > > Now, this is just a start. For the RK3399, we have a different CODEC
> > > > design. This one does not have the start_code_e bit. What the IP
> > > > does is that you give it one or more slices per buffer, set up the
> > > > params, and start decoding; the decoder then returns the location of
> > > > the following NAL. So basically you could offload the scanning of
> > > > start codes to the HW. That being said, with the driver layer in
> > > > between, that would be amazingly inconvenient to use, and with the
> > > > Boyer-Moore algorithm it is pretty cheap to scan for this type of
> > > > start code on the CPU. But the feature this allows is to operate in
> > > > frame mode, where you get 1 interrupt per frame.
> > > 
> > > I'm not sure there is any interest in exposing that to userspace, and
> > > my current feeling is that we should just ditch support for per-frame
> > > decoding altogether. I think it mixes decoding with notions that are
> > > higher-level than decoding, but I agree it's a blurry line.
> > 
> > I'm not sure ditching support for per-frame decoding would be a wise
> > decision. What if some device comes around that only supports frame
> > decoding and can't handle individual slices?
> > 
> > We have such a situation on Tegra, for example. I think the hardware can
> > technically decode individual slices, but it can also be set up to do a
> > lot more and operate in basically a per-frame mode where you just pass
> > it a buffer containing the complete bitstream for one frame and it'll
> > just raise an interrupt when it's done decoding.
> > 
> > Per-frame mode is what's currently implemented in the staging driver and
> > as far as I can tell it's also what's implemented in the downstream
> > driver, which uses a completely different architecture (it uploads a
> > firmware that processes a command stream). I have seen registers that
> > seem to be related to a slice-decoding mode, but honestly I have no idea
> > how to program them to achieve that.
> > 
> > Now the VDE IP that I'm dealing with is pretty old, but from what I know
> > of newer IP, they follow a similar command stream architecture as the
> > downstream VDE driver, so I'm not sure those support per-slice decoding
> > either. They typically have a firmware that processes command streams
> > and userspace typically just passes a single bitstream buffer along with
> > reference frames and gets back the decoded frame. I'd have to
> > investigate further to understand if slice-level decoding is supported
> > on the newer hardware.
> > 
> > I'm not familiar with any other decoders, but per-frame decoding doesn't
> > strike me as a very exotic idea. Excluding such decoders from the ABI
> > sounds a bit premature.
> 
> It would be premature to state that we are excluding anything. We are
> just trying to find one format to get things upstream, and to make sure
> we have a plan for how to extend it. Trying to support everything on
> the first try is not going to work so well.

Okay that sounds reasonable. I must have misinterpreted what you were
discussing. Sorry.

> What would be interesting to know is how your IP achieves multi-slice
> decoding per frame. That's what we are studying on the RK/Hantro chip.
> Typical questions are:
> 
>   1. Do all slices have to be contiguous in memory?

All of the systems that integrate the VDE have an SMMU, though on many
of them that SMMU is very limited (on one generation of Tegra it's
really only a GART, and on others the number of virtual address spaces
is so small that it's not always practical to rely on the SMMU). So if
SMMU support is enabled, slices can be scattered in physical memory,
but they will have to be I/O-virtually contiguous. The VDE itself does
not support SG.

>   2. If 1., do you place a start code or AVC header, or pass a separate index to let the HW locate the start of each NAL?

My understanding is that there's a "syntax engine" whose job it is to
parse the bitstream that you point it at (using the "bitstream engine"
to extract individual elements). The parsed syntax elements are used to
control the "macro-block engine" via a set of commands. The syntax
engine needs the start code in order to work and will generate an error
otherwise. I haven't come across a way to disable this, so it looks like
the start code is always required. Or I should say: the decoder always
requires the Annex B format. This also happens to be what, for example,
VDPAU will generate. I suppose it's a fairly natural choice, given that
it's the byte stream format recommended by the H.264 standard.

>   3. Does the HW support a single interrupt per frame? (The RK3288, as an example, does not, but the RK3399 does.)

Yeah, we definitely do get a single interrupt at the end of a frame, or
when an error occurs. Looking a bit at the register documentation, it
looks like this can be more fine-grained: we can, for example, get an
interrupt at the end of a slice or a row of macroblocks.

> And other things like this. The more data we have, the better the
> initial interface will be.
> 
> > 
> > > > But it also supports slice mode, with an
> > > > interrupt per slice, which is what we decided to use.
> > > 
> > > Easier for everyone and probably better for latency as well :)
> > 
> > I'm not sure I understand what's easier about slice-level decoding or
> > how this would improve latency. If anything, getting fewer interrupts
> > is good, isn't it?
> > 
> > If we can offload more to hardware, certainly that's something we want
> > to take advantage of, no?
> 
> In H.264, pretty much all streams have a single slice per frame,
> because that gives the highest quality. But in live streaming, like for
> WebRTC, it's getting more common to actually encode with multiple
> slices (usually groups of macroblocks in raster order). Usually it's a
> very small number of slices: 4, 8, something in this range.
> 
> When a slice is encoded, the encoder will send it off before it starts
> the following one; this allows the network transfer to happen in
> parallel with the decoding.
> 
> On the receiver, as soon as a slice is available, the decoder is
> started immediately, which allows the reception of buffers and the
> decoding of slices to happen in parallel. You end up with a lot less
> delay between the reception of the last slice and having a full frame
> ready.
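>
> To put illustrative numbers on it (purely made up for the example):
> say a frame takes 12 ms to arrive on the wire and 8 ms to decode, and
> is split into 4 slices. Waiting for the whole frame costs 12 + 8 =
> 20 ms from the first byte, while decoding slice by slice hides most of
> the 8 ms behind the transfer, so the frame is ready around 12 + 2 =
> 14 ms.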

Okay, that clarifies things. I'm not sure I fully agree with "a lot less
delay", though. Hardware decoders are usually capable of decoding in
real time, so in most cases I would expect the decoder latency to be
somewhere on the order of 16-40 ms, and network latency can't be much
higher than that to ensure smooth playback, so worst case the total
latency should be on the order of 32-80 ms. Even assuming 100 ms worst
case latency, that's not too bad in my experience. Unless you're aiming
for some application like game streaming, in which case you'd be more on
the lower end of that range anyway because of the required framerate.

Anyway, I'm not trying to argue that slice-level decoding is a bad thing
or unnecessary. I'm merely trying to point out that for many use-cases
frame-level decoding is more than good enough for people's needs.

> So that's how slices are used to reduce latency. Now, if you are
> decoding from a container like ISOMP4, you'll have full frames, so it
> makes sense to queue all these frames and let the decoder bundle them
> if possible, if the HW has a mode with a single IRQ per frame. Though
> it's pretty rare that you'll find such a file with slices. What we'd
> like to resolve is how these cases are handled. There is nothing that
> prevents it right now in the uAPI, but you'd have to copy the input
> into another buffer, adding the separators if needed.
> 
> What we are trying to achieve in this thread is to find a compromise
> that makes the uAPI sane, but also makes decoding efficient on all the
> HW we know of, at least.

It's been some time since I looked at this in detail, but my
recollection is that things like MPEG-TS use what is basically the Annex
B byte stream format. On the other hand, I recall that ffmpeg has a
filter that can be used to add a start code if the input stream doesn't
have one (e.g. if you are playing back from an MP4 container) but the
decoder requires one (e.g. VDPAU). I'm not familiar with VAAPI or things
like GStreamer, but I suspect that they have something similar in place.
Perhaps somebody with more knowledge of those can share their wisdom. If
there are any commonalities between all of those, maybe that could serve
as guidance on what a V4L2 interface should provide in terms of input
format.
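
As a rough illustration of what such a filter (h264_mp4toannexb in
ffmpeg's case, IIRC) has to do, assuming the common 4-byte length
prefixes (the actual prefix size comes from the codec configuration
record):

  #include <stdint.h>
  #include <stddef.h>
  #include <string.h>

  /* Rewrite 4-byte AVC NAL length prefixes into 4-byte Annex B start
   * codes, in place; both happen to be 4 bytes, so no copy is needed. */
  static int avcc_to_annex_b(uint8_t *buf, size_t size)
  {
          static const uint8_t start_code[4] = { 0x00, 0x00, 0x00, 0x01 };
          size_t pos = 0;

          while (pos + 4 <= size) {
                  uint32_t nal_len = (uint32_t)buf[pos] << 24 |
                                     (uint32_t)buf[pos + 1] << 16 |
                                     (uint32_t)buf[pos + 2] << 8 |
                                     buf[pos + 3];

                  if (nal_len > size - pos - 4)
                          return -1; /* corrupt length field */

                  memcpy(&buf[pos], start_code, sizeof(start_code));
                  pos += 4 + nal_len;
          }
          return 0;
  }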

Naively, I would consider more information (rather than less) easier to
deal with. If you have more information than necessary, it's usually
pretty easy to skip it (hardware may already be able to do so, or you
can rewrite some pointer/offset to do that, as the sketch below
illustrates). On the other hand, if you have too little information,
it's not always easy to add it. I guess you could argue that it's not a
big issue for something like a start code, but it still means you have
to concatenate in order to prepend the data, which usually means you
need a copy in software if you don't have SG capabilities.
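
To make the pointer/offset point concrete: with the multi-planar API,
userspace could in principle queue the whole NALU and let data_offset
skip the prefix. A hedged sketch (whether a given decoder driver
honours data_offset is a separate question):

  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* Queue a buffer holding [start code][NAL]; data_offset points the
   * driver past the 4-byte prefix, and bytesused includes it. */
  static int queue_whole_nalu(int fd, unsigned int index, uint32_t size)
  {
          struct v4l2_plane plane = {
                  .bytesused   = size,
                  .data_offset = 4,
          };
          struct v4l2_buffer buf = {
                  .index    = index,
                  .type     = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
                  .memory   = V4L2_MEMORY_MMAP,
                  .m.planes = &plane,
                  .length   = 1,
          };

          return ioctl(fd, VIDIOC_QBUF, &buf);
  }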

Of course I may be somewhat biased because this happens to coincide with
what VDE expects...

Thierry
