Re: V4L2 M2M driver architecture question for a new hardware

Hello Karthik,

On Wed, Oct 12, 2022 at 10:59:50PM -0700, Karthik Poduval wrote:
> Hi All,
>
> I have hardware that does some sort of image manipulation. The
> hardware takes 2 inputs.
> - image buffer
> - config param buffer
> and generates one output which is also an image buffer.
> The input and output images formats fall under standard image
> definitions of V4L2 like various YUV/RGB formats (interleaved or
> multiplanar).
>
> The config param buffer is kind of like a set of instructions for the
> hardware that needs to be passed with every input and output image
> which tells the hardware how to process the image.
> The hardware will be given different input images and output images
> every time and possibly different config param buffers too (in some
> cases). The config param buffers may have variable sizes too based on
> the nature of processing for that frame, but input and output images
> are fixed in size for a given context. I should also mention that the
> config param buffers are a few KBs in size so zero copy is a
> requirement. The config params buffers are written by userspace
> (possibly also driver in kernel space) and read by hardware.
>

This sounds very much like how a regular M2M ISP driver works. I can't
speak for codecs as I'm no expert there, but I expect them to be
similar, so your use case is covered by existing drivers.

> Here were two mechanisms I had in mind while trying to design a V4L2
> M2M driver for this hardware.
> - Use a custom multiplanar input format where one plane is a config
> param buffer with remaining planes for input images (in case the input
> image is also multiplanar).

If you're wondering how to pass parameters to the HW, I suggest
considering an output video device node to which you simply queue
buffers containing your parameters.

Your HW could be modeled as a single subdevice with 3 video device
nodes, one output device for input images, one output device for
parameters, and one capture device for output images.

                   +-----------+
       +----+      | HW subdev |      +------+
       | In | ---> 0           0  --> | Out  |
       +----+      |           |      +------+
                   +-----0-----+
                         ^
                         |
                     +--------+
                     | params |
                     +--------+

The parameters buffer can be modeled using the v4l2_meta_format [1]
interface. The data format of the buffer could be defined as a custom
metadata format; you can see examples here [2].

[1] https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/dev-meta.html#c.v4l2_meta_format
[2] https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/meta-formats.html#meta-formats

I suggest looking at the IPU3 and RkISP1 drivers for reference.

> - Use dmabuf heaps to allocate config param buffer. Tie this config
> param buffer fd to an input buffer (using request API). Driver would
> have to attach the config param buffer dmabuf fd, use it and detach.
>

You should be able to allocate buffers on the video device as usual
and export them as dmabuf fds using VIDIOC_EXPBUF [3].

Once you have them you can map them in your application code and
write their content.

[3] https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-expbuf.html

> Any comments/concerns about the above two mechanisms ?
> Any other better ideas ?
> Are there any existing V4L2 M2M mechanisms present to deal with per
> frame param buffers that are also zero copy ?
> Is the media request API able to do zero copy for setting compound
> controls for large (several KBs) compound controls ? (making the above
> dmabuf heap approach unnecessary)

Now, all the above assumes your parameters buffer is modeled as a
structure of parameters (and possibly data tables). If you are instead
looking at something that can be modeled through controls, you might
find better guidance by looking at how codecs work, but there I can't
help much ;)

Hope it helps
   j
>
> --
> Regards,
> Karthik Poduval
