[ANN v3] Media sessions in Lyon in October: codecs

(Updated Maxime's email address in v3)

Hi all,

Since we have three separate half-day sessions for different topics, I decided
to split this announcement into three emails as well, so each topic can be
discussed in its own thread.

All sessions are in room Terreaux VIP Lounge - Level 0.
The room holds a maximum of 15 people.

The first session deals with the codec API and is on Tuesday morning from
8:30 to 12:00 (we have to vacate the room at that time). Note the 8:30
start time!

Confirmed attendees for this session:

Boris Brezillon <boris.brezillon@xxxxxxxxxxxxx>
Alexandre Courbot <acourbot@xxxxxxxxxxxx>
Nicolas Dufresne <nicolas@xxxxxxxxxxxx>
Tomasz Figa <tfiga@xxxxxxxxxxxx>
Ezequiel Garcia <ezequiel@xxxxxxxxxxxxx>
Dafna Hirschfeld <dafna.hirschfeld@xxxxxxxxxxxxx>
Paul Kocialkowski <paul.kocialkowski@xxxxxxxxxxx>
Maxime Ripard <maxime.ripard@xxxxxxxxxxx>
Dave Stevenson <dave.stevenson@xxxxxxxxxxxxxxx>
Michael Tretter <m.tretter@xxxxxxxxxxxxxx>
Stanimir Varbanov <stanimir.varbanov@xxxxxxxxxx>
Hans Verkuil <hverkuil@xxxxxxxxx>

Please let me know ASAP if I missed someone, or if you are listed but
can't join for some reason.

There are three seats left, and I have five on the 'just interested'
list:

Daniel Gomez <daniel@xxxxxxxx>
Eugen Hristev <Eugen.Hristev@xxxxxxxxxxxxx>
Helen Koike <helen.koike@xxxxxxxxxxxxx>
Jacopo Mondi <jacopo@xxxxxxxxxx>
Laurent Pinchart <laurent.pinchart@xxxxxxxxxxxxxxxx>

If you still want to join, please mail me. First come, first served :-)

Agenda:

Note: I didn't assign start times; we'll just go through these items one by one.

- Status of any pending patches related to codec support.
  I'll provide a list of those patches by the end of next week so we
  can go through them.

- Requirements for moving codec drivers out of staging.

- Finalize the stateful encoder API. There are two pieces that need
  to be defined:

  1) Setting the frame rate so bitrate control can make sense, since
     the bitrate controls need this information. This is also relevant
     for stateless codecs (and it may have to change on a per-frame
     basis for stateless codecs!).

     This can either be implemented via ENUM_FRAMEINTERVALS for the coded
     pixelformats plus S_PARM support, or we just add a new control for this,
     e.g. V4L2_CID_MPEG_VIDEO_FRAME_INTERVAL using struct v4l2_fract (see the
     first sketch after the agenda).

     I am inclined to go with a control, since the semantics don't really
     match ENUM_FRAMEINTERVALS/S_PARM. These ioctls still need to be supported
     for legacy drivers. Open question: some drivers (mediatek, hva, coda)
     require S_PARM(OUTPUT), some (venus) allow both S_PARM(CAPTURE) and
     S_PARM(OUTPUT). I am inclined to allow both since this is not a CAPTURE
     vs OUTPUT thing, it is global to both queues.

  2) Interactions between OUTPUT and CAPTURE formats.

     The main problem is what to do if the capture sizeimage is too small
     for the OUTPUT resolution when streaming starts.

     Proposal: width and height of S_FMT(OUTPUT) plus max-bitrate plus frame
     interval plus key frame interval info are used to calculate a minimum
     CAPTURE sizeimage (app may request more). This is codec-specific, I think,
     so it should be possible to provide helper functions for this.

     However, it may be quite difficult to make a good calculation. I just
     don't know enough to determine this.

     V4L2_FMT_FLAG_DYN_RESOLUTION is always cleared for codec formats
     for the encoder (i.e. we don't support mid-stream resolution
     changes for now) and V4L2_EVENT_SOURCE_CHANGE is not
     supported.

     Of course, if we start to support mid-stream resolution
     changes (or other changes that require a source change event),
     then this flag should be set by the encoder driver and
     documentation on how to handle the source change event should
     be documented in the encoder spec. I prefer to postpone this
     until we have an encoder than can actually do mid-stream
     resolution changes.

     If the CAPTURE sizeimage is too small for the OUTPUT
     resolution and V4L2_EVENT_SOURCE_CHANGE is not supported,
     then the second STREAMON (either CAPTURE or OUTPUT) will
     return -ENOMEM since there is not enough memory to do the
     encode.

     If V4L2_FMT_FLAG_DYN_RESOLUTION is cleared (which is the
     case for all current encoders), then any bitrate controls
     will be limited in range to what the current state (CAPTURE and
     OUTPUT formats and frame interval) supports.

- Stateless encoder support

  Overall goals:

  - Find out if there is a common set of per frame encoding parameters
  - Find out if bitrate control can be reused for multiple HW
  - Decide if we do in-kernel bitrate control or not
  - Decide if we keep bitstream header crafting external (unlike Hantro
    JPEG encoder, but like VAAPI)
  - Decide if we provide (and maintain) a libv4l2 plugin, as the ChromeOS
    folks opted for.

  I hope Nicolas and Tomasz can prepare for this.

  The one requirement that Cisco has for these devices is that we must be
  able to do per-frame bitrate control from userspace (see the second
  sketch after the agenda).
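
Appendix: two rough userspace sketches for the items above. Both are
illustrations only, not a proposal for the final uAPI.

Sketch 1: setting the frame rate for the stateful encoder (item 1 of the
stateful encoder API). Option A uses the existing S_PARM ioctl on the
OUTPUT queue; option B shows what a new compound control could look like.
The control name and ID in option B are placeholders and do not exist in
mainline.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Option A: the existing (legacy) path, S_PARM on the OUTPUT queue. */
static int set_frame_interval_parm(int fd, __u32 numerator, __u32 denominator)
{
	struct v4l2_streamparm parm;

	memset(&parm, 0, sizeof(parm));
	parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	/* e.g. 1/30 -> 30 frames per second */
	parm.parm.output.timeperframe.numerator = numerator;
	parm.parm.output.timeperframe.denominator = denominator;
	return ioctl(fd, VIDIOC_S_PARM, &parm);
}

/*
 * Option B: a new compound control carrying a struct v4l2_fract.
 * The ID below is a made-up placeholder for the proposed
 * V4L2_CID_MPEG_VIDEO_FRAME_INTERVAL; it does not exist in mainline.
 */
#define V4L2_CID_MPEG_VIDEO_FRAME_INTERVAL_SKETCH (V4L2_CID_MPEG_BASE + 0x1000)

static int set_frame_interval_ctrl(int fd, __u32 numerator, __u32 denominator)
{
	struct v4l2_fract ival = {
		.numerator = numerator,
		.denominator = denominator,
	};
	struct v4l2_ext_control ctrl;
	struct v4l2_ext_controls ctrls;

	memset(&ctrl, 0, sizeof(ctrl));
	memset(&ctrls, 0, sizeof(ctrls));
	ctrl.id = V4L2_CID_MPEG_VIDEO_FRAME_INTERVAL_SKETCH;
	ctrl.size = sizeof(ival);
	ctrl.ptr = &ival;
	ctrls.which = V4L2_CTRL_WHICH_CUR_VAL;
	ctrls.count = 1;
	ctrls.controls = &ctrl;
	return ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);
}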
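
Sketch 2: per-frame parameters for a stateless encoder. One way to pass
them from userspace is to attach controls to each raw frame via the
existing Request API, as stateless decoders already do. The mechanism
below exists today; the per-frame QP control is a placeholder, and which
controls a stateless encoder would actually expose is exactly what needs
to be decided.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>
#include <linux/videodev2.h>

/* Placeholder for a hypothetical per-frame control; not in mainline. */
#define V4L2_CID_STATELESS_ENC_FRAME_QP_SKETCH (V4L2_CID_MPEG_BASE + 0x1100)

/*
 * Queue one raw OUTPUT buffer together with its per-frame parameters,
 * using the Request API. media_fd is the media controller device,
 * video_fd the encoder video device, index the OUTPUT buffer index.
 * Filling the frame data and plane bytesused is omitted for brevity.
 */
static int queue_frame_with_qp(int media_fd, int video_fd, __u32 index, __s32 qp)
{
	struct v4l2_ext_control ctrl;
	struct v4l2_ext_controls ctrls;
	struct v4l2_buffer buf;
	struct v4l2_plane planes[1];
	int req_fd;

	/* Allocate a request. */
	if (ioctl(media_fd, MEDIA_IOC_REQUEST_ALLOC, &req_fd))
		return -1;

	/* Attach the per-frame control to the request. */
	memset(&ctrl, 0, sizeof(ctrl));
	memset(&ctrls, 0, sizeof(ctrls));
	ctrl.id = V4L2_CID_STATELESS_ENC_FRAME_QP_SKETCH;
	ctrl.value = qp;
	ctrls.which = V4L2_CTRL_WHICH_REQUEST_VAL;
	ctrls.request_fd = req_fd;
	ctrls.count = 1;
	ctrls.controls = &ctrl;
	if (ioctl(video_fd, VIDIOC_S_EXT_CTRLS, &ctrls))
		return -1;

	/* Queue the raw frame as part of the same request. */
	memset(&buf, 0, sizeof(buf));
	memset(planes, 0, sizeof(planes));
	buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = index;
	buf.length = 1;
	buf.m.planes = planes;
	buf.flags = V4L2_BUF_FLAG_REQUEST_FD;
	buf.request_fd = req_fd;
	if (ioctl(video_fd, VIDIOC_QBUF, &buf))
		return -1;

	/* Submit the request. */
	return ioctl(req_fd, MEDIA_REQUEST_IOC_QUEUE);
}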

Regards,

	Hans


