Re: [PATCH RFC] V4L: Add s_rx_buffer subdev video operation

Hi Sakari,

On 09/26/2012 10:49 PM, Sakari Ailus wrote:
>>>> On 09/24/2012 03:44 PM, Sakari Ailus wrote:
>>>>> How about using a separate video buffer queue for the purpose? That
>>>>> would provide a nice way to pass it to the user space where it's
>>>>> needed. It'd also play nicely together with the frame layout
>>>>> descriptors.
>>>>
>>>> It's tempting, but doing frame synchronisation in user space in this
>>>> case would have been painful, if at all possible in a reliable
>>>> manner. It would have significantly complicated both the applications
>>>> and the drivers.
>>>
>>> Let's face it: applications that are interested in this information
>>> have to do exactly the same frame number matching with the statistics
>>> buffers. Just stitching the data to the same video buffer isn't a
>>> generic solution.
>>
>> Let me list the disadvantages of using a separate buffer queue:
>>
>> 1. significant complication of the driver:
>> - need to add video node support with all its memory and file ops,
> 
> That's not much code, since the driver already does it once.

The driver is currently split into 3 separate modules; some of them are
used in one SoC series, some in another. It's all relative, but I
estimate that adding a video capture node to this subdev driver would
have increased its LOC by 100%. New SoCs keep adding more and more new
modules, so keeping everything as simple as possible is not a
meaningless goal.

>> - more complicated VIDIOC_STREAMON logic, the MIPI CSI receiver needs
>> to care about the data pipeline details (power, streaming, ...);
> 
> The power management complication is practically non-existent;
> streaming likely requires ensuring that the STREAMON IOCTL has been
> issued on both nodes before streaming is really started.

Sorry, my concern here was mostly about the kernel side, e.g. if we
want to use metadata without the actual image data, the sensor must be
powered on and streaming. I agree this is mostly an implementation
issue, but it likely needs some thought and time to get right.

>> 2. more processing overhead due to handling a second /dev/video node;
> 
> True.
> 
>> 3. much more complex device handling in user space.
> 
> Somewhat, yes.
> 
>> All this for virtually nothing but the two 4-byte integers we are
>> interested in from the Embedded Data stream.
>> And the advantages:
>>
>> 1. A more generic solution, no need to invent new fourccs for standard
>> image data formats with metadata (a new fourcc is needed anyway for
>> the device-specific image data: JPEG/YUV/.../YUV/JPEG/metadata), and
>> we can choose to use multi-planar buffers only for non-standard
>> formats and a separate metadata buffer queue for the others;
>> 2. Probably other host IF/ISP drivers would implement it this way, or
>> would they?
>> 3. What else could be added?
> 
> One more advantage is that you'll get the metadata to the user space
> immediately after it's been written to the system memory; no need to
> wait for the rest of the frame. That may be interesting sometimes.

Agreed.

>> Currently I don't see a justification for using a separate video node
>> as a grabber for the frame embedded data. I don't expect it to be
>> useful for us in the future, not for ISPs that process sensor data
>> separately from the host CPUs. Moreover, this MIPI-CSIS device has a
>> maximum buffer size of only 4 KiB, which effectively limits its
>> possible use cases.
> 
> That's your hardware. Most CSI-2 receivers I've seen either write the 
> metadata to the same buffer or to a different memory area equivalent to 
> the video buffer where the image is stored.

Yes, I'm aware of that. :)

>> I don't think there is a need to force host drivers to use either
>> separate buffer queues or the multi-planar API, especially in case of
>> non-standard hybrid data formats. I'm ready to discuss the separate
>> buffer queue approach if we have a real use case for it. I don't
>> think these two methods are exclusive.
> 
> I agree with that. It should be made possible for the user to decide
> which one to use. The hardware may also limit the possibilities: if it
> just recognises a single buffer, there's not much that the software
> can do. Multi-plane buffers are the only option in that case, I guess.
> 
>> Until then I would prefer not to live with an awkward solution.
> 
> I think it would be good to be able to have plane-specific formats on
> multi-plane buffers. That way we wouldn't end up having a new pixel
> format for every metadata / image format pair. And there will be many,
> many of those, and the applications certainly don't want to care about
> the combinations themselves.

Yes, I agree 200%. This is going to be needed sooner or later.
With the current definition of multi-planar formats it doesn't seem
possible to use the same fourcc both with and without an attached
metadata plane.
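
For illustration, a minimal userspace sketch of what the current API
allows; the 'YMET' fourcc and the resolution are made up, the point
being that the single pixelformat field has to encode the whole
image + metadata combination:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int set_two_plane_format(int fd)
{
	struct v4l2_format fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	/*
	 * One pixelformat describes all planes; there is no per-plane
	 * fourcc, hence a new (here hypothetical) code is needed for
	 * each image format / metadata combination.
	 */
	fmt.fmt.pix_mp.pixelformat = v4l2_fourcc('Y', 'M', 'E', 'T');
	fmt.fmt.pix_mp.width = 1920;
	fmt.fmt.pix_mp.height = 1080;
	fmt.fmt.pix_mp.field = V4L2_FIELD_NONE;
	fmt.fmt.pix_mp.num_planes = 2;
	/* plane 0: image data, plane 1: embedded (meta) data */
	fmt.fmt.pix_mp.plane_fmt[1].sizeimage = 4096;

	return ioctl(fd, VIDIOC_S_FMT, &fmt);
}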

>>>> VIDIOC_STREAMON, VIDIOC_QBUF/DQBUF calls would have been at least
>>>> roughly synchronized, and applications would have to know somehow
>>>> which video nodes need to be opened together. I guess things like
>>>> that could be abstracted in a library, but what do we really gain
>>>> for such effort? And now I can just ask the kernel for 2-planar
>>>> buffers where everything is in place...
>>>
>>> That's equally good --- some hardware can only do that after all,
>>> but do you need the callback in that case, if there's a single
>>> destination buffer anyway? Wouldn't the frame layout descriptor have
>>> enough information to do this?
>>
>> There are as many buffers as the user requested with REQBUFS. In the
>> VSYNC interrupt
> 
> I meant separately allocated and mapped memory areas related to a single 
> frame.

Oh, you mean the relation between those two memory areas is constant?

It doesn't help, since the subdev driver has no knowledge of any buffers
handled by the (separate) DMA engine driver.

>> of one device there is a buffer configured for the other device. With
>> each frame interrupt a different buffer is used, the one that the DMA
>> engine actually writes data to. Data copying happens from the
>> MIPI-CSIS internal ioremapped buffer to a buffer owned by the host
>> interface driver. And the callback is used for dynamically switching
>> buffers at the subdev.
> 
> So... your CSI-2 receiver has got a small internal memory where the 
> metadata can be written? That's certainly a novel solution. :-)

lol :) Yes, you see now, it's hard to treat this kind of hardware as
a reference for developing generic APIs... :/

This sub-device (it's basically just a MIPI CSI-2 front-end) is wired to
the APB bus (simple and slow) and to some internal interconnect bus to
the FIMC (DMA engine). It doesn't seem to implement an AXI bus
interface. There can't be much throughput for the Embedded Data DT,
since it is captured over the slow APB bus (the same one that is used
for accessing the device's control registers).

> I still don't quite understand the need for the callback. First of
> all, did I understand correctly that a driver for different hardware
> than the one in the memory of which the metadata actually resides
> would copy the contents of this memory to a multi-plane video buffer
> that it eventually passes to user space?

Yes, that's correct. So how would you let the driver that captures
metadata into its small internal (SRAM?) buffers know where to dump
their content, so it is not overwritten with data from a subsequent
frame?
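
That is exactly what the callback is for. Roughly like below; a sketch
only, assuming the s_rx_buffer signature proposed in this RFC
(void *buf, unsigned int *size), with made-up host_dev/host_buffer
structures and a made-up host_next_buffer() helper:

#include <linux/interrupt.h>
#include <media/v4l2-subdev.h>

/* Hypothetical host (DMA engine) driver structures, for illustration. */
struct host_buffer {
	void *meta_vaddr;		/* metadata plane virtual address */
	unsigned int meta_size;		/* metadata plane size in bytes */
};

struct host_dev {
	struct v4l2_subdev *csis_sd;	/* the MIPI-CSIS subdev */
	/* ... DMA engine state, buffer queue, etc. ... */
};

/* Picks the buffer the DMA engine will write the next frame to. */
static struct host_buffer *host_next_buffer(struct host_dev *host);

static irqreturn_t host_frame_irq(int irq, void *priv)
{
	struct host_dev *host = priv;
	struct host_buffer *buf = host_next_buffer(host);
	unsigned int size = buf->meta_size;

	/*
	 * Point the CSI receiver at the memory area for the next frame's
	 * embedded data, so the contents of its small internal buffer
	 * get copied out before the following frame overwrites them.
	 */
	v4l2_subdev_call(host->csis_sd, video, s_rx_buffer,
			 buf->meta_vaddr, &size);

	return IRQ_HANDLED;
}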

--

Regards,
Sylwester

