Hello list,
this series serves as a base for the forthcoming discussion at the Linux
Media Summit in Vienna on Monday.

The series is sent as RFC but has been tested and developed on a real
hardware platform, the Raspberry Pi 5 PiSP Back End.

I won't go into great length here on the background and motivations of the
series, as they will be the subject of Monday's discussion, but here is a
bit of background, taken from the abstract presented for the Media Summit.

-------------------------------------------------------------------------------

Modern ISPs are designed to handle multiple "streams" of data, not
necessarily related to the same image or video stream. The hardware
resources are generally time-multiplexed between different execution
contexts at the hardware or firmware level, and in order to operate the
ISP with multiple video sources it is necessary for drivers to keep track
of per-context data and resources.

In V4L2 the M2M framework supports multiple contexts through multiple
opens of the same video device. It does not, however, support drivers that
expose multiple video devices and sub-devices.

Several out-of-tree drivers implement multi-context support by registering
multiple 'logical' instances of the same media graph, one for each
context. This effectively multiplies the number of video device and
subdevice nodes. Userspace applications open one media graph instance and
operate on the corresponding video devices, under the impression of
dealing with a dedicated instance of the sole underlying hardware
resource.

This solution is however a short-term hack: it doesn't scale well as the
number of contexts grows. ISPs such as the Mali C55 have been designed to
process 8 cameras concurrently, and other ISPs may do more.

For this reason, a solution to expose and manage multiple execution
contexts without duplicating the number of media, video and sub-devices
registered to userspace is needed to improve support for multi-context
devices in V4L2. This will also be useful for codecs that need more than
an output and a capture video queue.

-------------------------------------------------------------------------------

The series enables userspace to multiplex the usage of a media device and
of video devices without duplicating the number of devnodes in userspace,
by introducing the following concepts in the framework:

- Media Device Context: a context created at media-device open time and
  stored in the media-fh file handle. Each media device context has
  associated with it a list of media entity contexts which are 'bound'
  to it.

- Video Device Context (and V4L2 Subdevice Context, not implemented in
  this RFC): represents an isolated execution context of a video device.
  By storing the data and the configuration of a video device, it allows
  userspace to effectively multiplex the usage of a device node. Both the
  Video Device Context and the V4L2 Subdevice Context extend the Media
  Entity Context base type so that the MC and V4L2 layers are kept
  independent from each other.

- A Video Device Context is created by a new ioctl, VIDIOC_BIND_CONTEXT,
  and is stored in the v4l2-fh file handle. VIDIOC_BIND_CONTEXT
  associates a Video Device Context with a Media Device Context. By
  binding a set of video devices and subdevices to a media device
  context, userspace can create several isolated 'execution contexts'
  which can be operated independently from each other (see the sketch
  below).

The implementation has been tested by porting the PiSP BE driver in the
last patch to showcase the newly introduced driver API.
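To make the intended userspace flow more concrete, here is a minimal,
hypothetical sketch. The exact argument layout of VIDIOC_BIND_CONTEXT is
defined by the uAPI patch in this series and may differ from what is
shown: in particular 'struct v4l2_bind_context' and its 'media_fd' field
are invented here purely for illustration.

/*
 * Hypothetical userspace sketch of the proposed flow. The names
 * 'struct v4l2_bind_context' and 'media_fd' are assumptions made for
 * illustration only; the real argument layout is defined by the
 * VIDIOC_BIND_CONTEXT uAPI patch in this series.
 */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int bind_one_context(const char *video_devnode)
{
	int media_fd, video_fd;

	/* Each open of the media device creates a new Media Device Context. */
	media_fd = open("/dev/media0", O_RDWR);
	if (media_fd < 0)
		return -1;

	/* All contexts share the same video devnode: no node duplication. */
	video_fd = open(video_devnode, O_RDWR);
	if (video_fd < 0) {
		close(media_fd);
		return -1;
	}

	/*
	 * Create a Video Device Context on this file handle and bind it to
	 * the Media Device Context: formats, buffers and streaming state
	 * set through video_fd now belong to this context only.
	 */
	struct v4l2_bind_context bind = {
		.media_fd = media_fd,	/* assumed field name */
	};
	if (ioctl(video_fd, VIDIOC_BIND_CONTEXT, &bind) < 0) {
		close(video_fd);
		close(media_fd);
		return -1;
	}

	/* ... VIDIOC_S_FMT, VIDIOC_REQBUFS, VIDIOC_STREAMON as usual ... */
	return video_fd;
}

Calling a helper like this twice on the same devnode (for instance once
per camera in the two-stream test below) would yield two isolated
execution contexts that can be configured and streamed independently.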
The implementation has also been tested with a slightly modified version
of libcamera, with 2 concurrent camera streams running in parallel.

Although I'm listed as the author of the implementation proposed here,
the majority of the design (and several private review rounds) has to be
attributed to Laurent, who has a much more profound understanding of the
framework and its future evolution than I do. Thanks for the guidance and
the several comments and discussions.

The series is based on Sakari's "[PATCH v4 00/26] Media device lifetime
management", which introduces media-fh.c. A branch based on RPi's v6.6.y,
used for testing, is available at
https://gitlab.freedesktop.org/linux-media/users/jmondi/-/tree/multicontext/rpi-6.6.y/v1

For any questions, see you on Monday in Vienna!

Jacopo Mondi (10):
  media: media-entity: Introduce media_entity_context
  media: media-device: Introduce media_device_context
  media: v4l2-dev: Introduce video device context
  media: v4l2-ioctl: Introduce VIDIOC_BIND_CONTEXT
  media: Introduce default contexts
  media: v4l2-dev: Add video_device_context_from_file()
  media: v4l2-dev: Add video_device_context_from_queue()
  videobuf2-v4l2: Support vb2_queue embedded in a context
  media: media-entity: Validate context in pipeline start
  media: pispbe: Add support for multi-context

 .../media/common/videobuf2/videobuf2-v4l2.c   | 129 +++--
 drivers/media/mc/mc-device.c                  | 179 ++++++
 drivers/media/mc/mc-entity.c                  | 136 ++++-
 .../platform/raspberrypi/pisp_be/pisp_be.c    | 509 +++++++++++++-----
 drivers/media/v4l2-core/v4l2-dev.c            | 141 ++++-
 drivers/media/v4l2-core/v4l2-fh.c             |   1 +
 drivers/media/v4l2-core/v4l2-ioctl.c          |  64 +++
 include/media/media-device.h                  | 215 ++++++
 include/media/media-entity.h                  | 141 ++++-
 include/media/media-fh.h                      |   4 +
 include/media/v4l2-dev.h                      | 235 ++++++
 include/media/v4l2-fh.h                       |   3 +
 include/media/v4l2-ioctl.h                    |   5 +
 include/uapi/linux/videodev2.h                |  11 +
 14 files changed, 1576 insertions(+), 197 deletions(-)

--
2.46.0