Hi All,

I think we need a separate buffer queue to manage the reconstruction (auxiliary) buffers used by V4L2 M2M encoder drivers.

Some drivers already allocate internal buffers as the reconstruction buffers for an encoding instance. drivers/media/platform/chips-media is one example: its coda_alloc_context_buf() allocates the maximum allowed number of references for the instance as reconstruction buffers. With that scheme you cannot control the lifetime of a reconstruction buffer, which means you cannot control which buffers act as references. That may be acceptable for a hardware encoder whose control chip does its own bitrate control, but for stateless encoders, which are driven entirely by the user, it would be better to let the user decide the lifetime of a reconstruction buffer. In an SVC scenario, a layer may refer to a buffer from another layer that was encoded many frames earlier.

I am not sure which way is better; I would implement one of the following based on the feedback:

1. Reuse V4L2_BUF_TYPE_VIDEO_OVERLAY. It would support REQBUFS, S_FMT, G_FMT, QBUF and DQBUF in addition to the existing m2m operations. (A rough userspace sketch of this idea is appended at the end of this mail.)

2. Extend those ioctls to the media node that the stateless m2m driver already uses for allocating the request_fd token.

Please note that the reconstruction buffers may use a different pixel format than the input frames. For example, Hantro H1 uses NV12_4L4 for its reconstruction buffers, and later generations of chips-media codecs use an FBC format. Also, some decoders have an online post-processor, which means the device cannot do pixel format conversion independently; the drivers for those devices may need this as well.

Sincerely
Randy
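
Appendix: a minimal userspace sketch of idea one, only to make the proposed flow concrete. Nothing here runs against an existing driver today; the semantics of S_FMT/REQBUFS/QBUF on the reused overlay type, the use of fmt.pix to carry the reconstruction format, and the buffer count are assumptions of this RFC, not defined API. NV12_4L4 needs a recent videodev2.h.

/*
 * Sketch of the proposed reconstruction buffer queue, reusing
 * V4L2_BUF_TYPE_VIDEO_OVERLAY on the encoder's m2m video node.
 * "fd" is an already-open encoder video node.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int setup_recon_queue(int fd, unsigned int width, unsigned int height)
{
	struct v4l2_format fmt;
	struct v4l2_requestbuffers reqbufs;

	/* Assumption: the reconstruction format is negotiated through
	 * fmt.pix even though the buffer type is the reused overlay type.
	 * NV12_4L4 matches what Hantro H1 would write. */
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_OVERLAY;
	fmt.fmt.pix.width = width;
	fmt.fmt.pix.height = height;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_NV12_4L4;
	if (ioctl(fd, VIDIOC_S_FMT, &fmt))
		return -1;

	/* The user, not the driver, decides how many reconstruction
	 * buffers exist and therefore which ones can act as references. */
	memset(&reqbufs, 0, sizeof(reqbufs));
	reqbufs.count = 4;	/* e.g. enough for an SVC layer set */
	reqbufs.type = V4L2_BUF_TYPE_VIDEO_OVERLAY;
	reqbufs.memory = V4L2_MEMORY_MMAP;
	if (ioctl(fd, VIDIOC_REQBUFS, &reqbufs))
		return -1;

	return reqbufs.count;
}

/*
 * Queue reconstruction buffer "index" so the next encoded frame is
 * reconstructed into it; the user keeps a buffer out of the queue for
 * as long as it must stay valid as a reference (e.g. across SVC layers).
 */
static int queue_recon_buffer(int fd, unsigned int index)
{
	struct v4l2_buffer buf;

	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_OVERLAY;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = index;
	return ioctl(fd, VIDIOC_QBUF, &buf);
}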