Hi Nicolas,

On 5/1/20 5:19 PM, Nicolas Dufresne wrote:
> On Thursday, 30 April 2020 at 14:38 +0300, Stanimir Varbanov wrote:
>> Here we add two more reasons for the dynamic-resolution-change state
>> (I think the name is misleading, especially the "resolution" word;
>> maybe dynamic-bitstream-change would describe it better).
>>
>> The first thing which could change in the middle of the stream is the
>> bit-depth. In the worst case the stream is 8bit at the beginning but
>> later in the bitstream it changes to 10bit. That change should be
>> propagated to the client so that it can take appropriate action. In
>> this case it most probably has to stop streaming on the capture
>> queue, re-negotiate the pixel format and start streaming again.
>>
>> The second new reason is a colorspace change. I'm not sure what action
>> the client should take, but at least it should be notified of such a
>> change. One possible action is to notify the display entity that the
>> colorspace and its parameters (Y'CbCr encoding and so on) have changed.
>
> Just to help with the use case, colorspace changes need to be
> communicated to the following HW or software in your media pipeline.
> Let's consider a V4L2-only use case:
>
>   m2m decoder -> m2m color transform -> ...
>
> Userspace needs to be aware in time, so that it can reconfigure the
> color transformation parameters. The V4L2 event is a misfit though,
> as it does not tell exactly which buffer will start carrying the new
> colorspace. So in theory, one would have to:
>
>  - drain
>  - send the new csc parameters
>  - resume
>
> I'm not sure if our drivers implement resuming after CMD_STOP, do you
> have information about that ?

According to the spec, after an implicit drain the decoder stops and
the client has two options:

1. streamoff -> reconfigure queue -> streamon
2. decoder command start

#2 would be the case with colorspace changes.
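Roughly, the two options look like this (untested sketch; error handling
and buffer mapping omitted, MPLANE capture queue and MMAP buffers
assumed, and the drain, i.e. dequeuing CAPTURE buffers until one marked
V4L2_BUF_FLAG_LAST, is assumed to be already done; the function names
and the num_bufs parameter are only for illustration):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Option 1: streamoff -> reconfigure queue -> streamon */
static void reconfigure_capture(int fd, unsigned int num_bufs)
{
        enum v4l2_buf_type cap = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        struct v4l2_requestbuffers reqbufs;
        struct v4l2_format fmt;

        ioctl(fd, VIDIOC_STREAMOFF, &cap);

        /* pick up the new bit-depth/colorspace selected by the decoder */
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = cap;
        ioctl(fd, VIDIOC_G_FMT, &fmt);

        memset(&reqbufs, 0, sizeof(reqbufs));
        reqbufs.type = cap;
        reqbufs.memory = V4L2_MEMORY_MMAP;
        reqbufs.count = 0;              /* free the old buffers */
        ioctl(fd, VIDIOC_REQBUFS, &reqbufs);
        reqbufs.count = num_bufs;       /* allocate for the new format */
        ioctl(fd, VIDIOC_REQBUFS, &reqbufs);

        /* queue the new CAPTURE buffers, then: */
        ioctl(fd, VIDIOC_STREAMON, &cap);
}

/* Option 2: decoder command start, existing capture buffers still fit */
static void resume_decoding(int fd)
{
        struct v4l2_decoder_cmd cmd;

        memset(&cmd, 0, sizeof(cmd));
        cmd.cmd = V4L2_DEC_CMD_START;
        ioctl(fd, VIDIOC_DECODER_CMD, &cmd);
}

The second path skips the REQBUFS cycle entirely, which is why it is
enough when only the colorspace changed and the buffer size and layout
stay the same.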

> We could also go through a streamoff/on
> cycle in the meantime. Most codecs won't allow changing these
> parameters on delta frames, as it would force the decoder to do CSC
> conversion of the reference frames in the decode process, which seems
> like an unrealistically complex requirement.

Shouldn't such changes be preceded by an IDR (or whatever is applicable
for the codec)?

>
> That being said, please keep in mind that in VP9, reference frames do
> not have to be of the same size. You can change the resolution at any
> point in time. No one managed to decode the related test vectors [0]
> with our current event-based resolution change notification.
>
> [0] FRM_RESIZE https://www.webmproject.org/vp9/levels/

I'd like to try those test streams.

So, if I understood your comments correctly, the colorspace change
event in the stateful decoder spec isn't needed?

>
>>
>> Signed-off-by: Stanimir Varbanov <stanimir.varbanov@xxxxxxxxxx>
>> ---
>>  Documentation/userspace-api/media/v4l/dev-decoder.rst | 6 +++++-
>>  1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/Documentation/userspace-api/media/v4l/dev-decoder.rst b/Documentation/userspace-api/media/v4l/dev-decoder.rst
>> index 606b54947e10..bf10eda6125c 100644
>> --- a/Documentation/userspace-api/media/v4l/dev-decoder.rst
>> +++ b/Documentation/userspace-api/media/v4l/dev-decoder.rst
>> @@ -906,7 +906,11 @@ reflected by corresponding queries):
>>
>>  * visible resolution (selection rectangles),
>>
>> -* the minimum number of buffers needed for decoding.
>> +* the minimum number of buffers needed for decoding,
>> +
>> +* bit-depth of the bitstream has been changed,
>> +
>> +* colorspace (and its properties) has been changed.
>>
>>  Whenever that happens, the decoder must proceed as follows:
>>
>

--
regards,
Stan