On Monday, 7 October 2019 at 13:44 +0200, Ricardo Ribalda Delgado wrote:
> Hi Tomasz
>
> On Mon, Oct 7, 2019 at 11:43 AM Tomasz Figa <tfiga@xxxxxxxxxxxx> wrote:
> > Hi Ricardo,
> >
> > On Mon, Oct 7, 2019 at 6:22 PM Ricardo Ribalda Delgado
> > <ricardo.ribalda@xxxxxxxxx> wrote:
> > > Hi Nicolas,
> > >
> > > Sorry to hijack the thread. Do you know if anyone at AMD is working
> > > on making a V4L driver for the encoder? Or do they want to continue
> > > with their Mesa approach?
> > >
> > > Is converting a Mesa VAAPI driver to V4L something doable by mere
> > > mortals? Is it just a matter of changing the API, or a complete
> > > rewrite of the code?
> >
> > Do you know what kind of hardware that is?
>
> AMD VCE
> https://en.wikipedia.org/wiki/Video_Coding_Engine
>
> > Is it a fully integrated smart encoder that manages everything
> > internally, or a "simplified" one like Rockchip or Intel, which needs
> > a lot of assistance from the software, like bitrate control and
> > bitstream assembly?
>
> From what I can read in the documentation, it looks more like the
> Intel one, with plenty of knobs to play with, and support for custom
> motion estimation and all the other fancy stuff.
>
> > Also, is the encoder an integral part of the GPU, or a distinct block
> > that can operate independently of the GPU driver? While it should be
> > possible to chain a V4L2 driver off the AMDGPU DRM driver, the VAAPI
> > model is kind of established for encoders that are closely tied to
> > the GPU.
>
> My concern about VAAPI is the complexity of the stack: to "simply"
> encode a video you need Mesa and LLVM. It would be nicer with a V4L2
> m2m driver and GStreamer... but I can see that it can get complicated
> if the VCE shares resources with the other parts of the GPU.

Best is to grab someone working on this in Mesa or at AMD. GPU-based
accelerators often use shaders to do part of the work, and shaders need
to be compiled, hence the need for LLVM or ACO.

regards,
Nicolas

> > > Thanks!
> >
> > Best regards,
> > Tomasz
> >
> > > Best regards!
> > >
> > > On Mon, Oct 7, 2019 at 2:05 AM Nicolas Dufresne <nicolas@xxxxxxxxxxxx> wrote:
> > > > On Thursday, 26 September 2019 at 19:21 +0900, Tomasz Figa wrote:
> > > > > On Mon, Sep 23, 2019 at 11:13 PM Hans Verkuil <hverkuil@xxxxxxxxx> wrote:
> > > > > > Hi all,
> > > > > >
> > > > > > Since we have three separate half-day sessions for different
> > > > > > topics, I decided to split the announcement into three emails as
> > > > > > well, so these things can be discussed in separate threads.
> > > > > >
> > > > > > All sessions are in room Terreaux VIP Lounge - Level 0.
> > > > > > There is a maximum of 15 people.
> > > > > >
> > > > > > The first session deals with the codec API and is on Tuesday
> > > > > > morning from 8:30 (tentative, might change) to 12:00 (we have to
> > > > > > vacate the room at that time).
> > > > > >
> > > > > > Confirmed attendees for this session:
> > > > > >
> > > > > > Boris Brezillon <boris.brezillon@xxxxxxxxxxxxx>
> > > > > > Alexandre Courbot <acourbot@xxxxxxxxxxxx>
> > > > > > Nicolas Dufresne <nicolas@xxxxxxxxxxxx>
> > > > > > Tomasz Figa <tfiga@xxxxxxxxxxxx>
> > > > > > Ezequiel Garcia <ezequiel@xxxxxxxxxxxxx>
> > > > > > Daniel Gomez <daniel@xxxxxxxx>
> > > > > > Dafna Hirschfeld <dafna.hirschfeld@xxxxxxxxxxxxx>
> > > > > > Eugen Hristev <Eugen.Hristev@xxxxxxxxxxxxx>
> > > > > > Paul Kocialkowski <paul.kocialkowski@xxxxxxxxxxx>
> > > > > > Helen Koike <helen.koike@xxxxxxxxxxxxx>
> > > > > > Michael Tretter <m.tretter@xxxxxxxxxxxxxx>
> > > > > > Hans Verkuil <hverkuil@xxxxxxxxx>
> > > > > >
> > > > > > Tentative:
> > > > > >
> > > > > > Laurent Pinchart <laurent.pinchart@xxxxxxxxxxxxxxxx>
> > > > > > Jacopo Mondi <jacopo@xxxxxxxxxx>
> > > > > >
> > > > > > Jacopo, please confirm whether you want to attend this session.
> > > > > > I didn't find an email with an explicit confirmation, so it was
> > > > > > probably done via IRC.
> > > > > > But since this session is getting close to capacity, I would
> > > > > > prefer to keep attendance to those who are actually working with
> > > > > > codecs (or will work with them in the near future).
> > > > > >
> > > > > > If I missed someone, or you are on the list but won't attend
> > > > > > after all, then please let me know.
> > > > > >
> > > > > > Agenda:
> > > > > >
> > > > > > - Status of any pending patches related to codec support.
> > > > > >
> > > > > > - Requirements for moving codec drivers out of staging.
> > > > > >
> > > > > > - Finalize the stateful encoder API. There are two pieces that
> > > > > >   need to be defined:
> > > > > >
> > > > > >   1) Setting the frame rate, so that bitrate controls can make
> > > > > >      sense, since they need this information. This is also
> > > > > >      relevant for stateless codecs (and it may have to change on
> > > > > >      a per-frame basis for stateless codecs!).
> > > > > >
> > > > > >      This can either be implemented via ENUM_FRAMEINTERVALS for
> > > > > >      the coded pixel formats plus S_PARM support, or we just add
> > > > > >      a new control for this, e.g.
> > > > > >      V4L2_CID_MPEG_VIDEO_FRAME_INTERVAL (or perhaps FRAME_RATE).
> > > > > >      If we go for a control, then we need to consider the unit;
> > > > > >      we could use a fraction as well. See this series that lays
> > > > > >      the foundation for that:
> > > > > >      https://patchwork.linuxtv.org/cover/58857/
> > > > > >
> > > > > >      I am inclined to go with a control, since the semantics
> > > > > >      don't really match ENUM_FRAMEINTERVALS/S_PARM. These ioctls
> > > > > >      still need to be supported for legacy drivers. Open
> > > > > >      question: some drivers (mediatek, hva, coda) require
> > > > > >      S_PARM(OUTPUT), some (venus) allow both S_PARM(CAPTURE) and
> > > > > >      S_PARM(OUTPUT). I am inclined to allow both, since this is
> > > > > >      not a CAPTURE vs OUTPUT thing; it is global to both queues.
> > > > > >
> > > > > >   2) Interactions between OUTPUT and CAPTURE formats.
> > > > > >
> > > > > >      The main problem is what to do if the CAPTURE sizeimage is
> > > > > >      too small for the OUTPUT resolution when streaming starts.
> > > > > >
> > > > > >      Proposal: the width and height of S_FMT(OUTPUT) are used to
> > > > > >      calculate a minimum sizeimage (the application may request
> > > > > >      more). This is driver-specific. (Is it? Or is this
> > > > > >      codec-specific?)
> > > > > >
> > > > > >      V4L2_FMT_FLAG_FIXED_RESOLUTION is always set for codec
> > > > > >      formats for the encoder (i.e. we don't support mid-stream
> > > > > >      resolution changes for now) and V4L2_EVENT_SOURCE_CHANGE is
> > > > > >      not supported. See https://patchwork.linuxtv.org/patch/56478/
> > > > > >      for the patch adding this flag.
> > > > > >
> > > > > >      Of course, if we start to support mid-stream resolution
> > > > > >      changes (or other changes that require a source change
> > > > > >      event), then this flag should be dropped by the encoder
> > > > > >      driver, and how to handle the source change event should be
> > > > > >      documented in the encoder spec. I prefer to postpone this
> > > > > >      until we have an encoder that can actually do mid-stream
> > > > > >      resolution changes.
> > > > > >
> > > > > >      If the CAPTURE sizeimage is too small for the OUTPUT
> > > > > >      resolution and V4L2_EVENT_SOURCE_CHANGE is not supported,
> > > > > >      then the second STREAMON (either CAPTURE or OUTPUT) will
> > > > > >      return -ENOMEM, since there is not enough memory to do the
> > > > > >      encode.
> > > > > >
> > > > > >      If V4L2_FMT_FLAG_FIXED_RESOLUTION is set (i.e. that should
> > > > > >      be the case for all current encoders), then any bitrate
> > > > > >      controls will be limited in range to what the current state
> > > > > >      (CAPTURE and OUTPUT formats and frame rate) supports.
> > > > > >
> > > > > > - Stateless encoders?
> > > > >
> > > > > This could indeed be a good topic.
> > > > > The hantro driver currently only
> > > > > supports JPEG encoding, but the hardware also supports at least
> > > > > H.264 and VP8. However, it handles only the core parts of the
> > > > > encoding, i.e. generating the actual encoded frames (slices)
> > > > > without headers. It also doesn't do any bitrate control or scene
> > > > > change detection on its own, and expects quality parameters (QP)
> > > > > or keyframe requests to come from the software.
> > > > >
> > > > > I'm not sure if there is any other hardware with similar
> > > > > constraints that could use V4L2, but I believe the Intel video
> > > > > encoder supported by VAAPI has a similar design.
> > > >
> > > > I'll try to gather some information about that next week, to
> > > > prepare a little. As of now, we have the Rockchip mpp library and
> > > > the ChromeOS version (which reuses the former's code), then the
> > > > Intel and AMD VAAPI drivers (whose support is implemented in FFmpeg
> > > > and GStreamer).
> > > >
> > > > Maybe Paul can provide some known information about Cedrus (if
> > > > any), even though this is probably harder to gather. We can also
> > > > study software encoders like OpenH264, x264, libvpx, etc. to see if
> > > > there is a common pattern in the parameters passed between the
> > > > state manager and the low-level encoders.
> > > >
> > > > The overall goals are, I believe (draft):
> > > > - Find out if there is a common set of per-frame encoding
> > > >   parameters.
> > > > - Find out if bitrate control can be reused across multiple HW.
> > > > - Decide if we do in-kernel bitrate control or not.
> > > > - Decide if we keep bitstream header crafting external (unlike the
> > > >   Hantro JPEG encoder, but like VAAPI).
> > > > - Decide if we provide (and maintain) a libv4l2 plugin, as the
> > > >   ChromeOS folks opted for.
> > > >
> > > > > Best regards,
> > > > > Tomasz
> > > > >
> > > > > > - Anything else?
> > > > > >   (I have a feeling I missed a codec-related topic, but I
> > > > > >   can't find it in my mailbox.)
> > > > > >
> > > > > > Regards,
> > > > > >
> > > > > > Hans
> > >
> > > --
> > > Ricardo Ribalda
>
> --
> Ricardo Ribalda