Hi Hans, Laurent,

On Mon, Sep 23, 2019 at 10:11:09AM +0200, Hans Verkuil wrote:
> Hi Laurent,
>
> On 9/21/19 1:48 AM, Laurent Pinchart wrote:
> > Hi Hans,
> >
> > On Tue, Sep 17, 2019 at 03:36:44PM +0200, Hans Verkuil wrote:
> >> This is a follow-up series from Vandana's "[v3] v4l2-core: Add metadata type to
> >> vfl_devnode_type" patch: https://patchwork.linuxtv.org/patch/58755/
> >>
> >> While testing that v3 patch with a patched version of vivid that has metadata
> >> capture support, I realized that metadata should be treated the same way as
> >> vbi in determine_valid_ioctls(). That makes sense since vbi *is* effectively
> >> metadata. So I changed Vandana's patch (hence my Co-Developed-by tag) to
> >> correctly validate the ioctls for metadata.
> >>
> >> I also added two follow-up patches to simplify the SDR code in that function,
> >> and to fix the code for v4l-touch devices (too many ioctls were enabled for
> >> such devices). I think the final code is easier to read as well.
> >
> > Quoting my reply to "[RFC] V4L2 & Metadata: switch to /dev/v4l-metaX
> > instead of /dev/videoX" as it may have fallen through the cracks, and I
> > don't want this series to be merged without addressing the issue,
> >
> > One of the reasons [we didn't introduce a metadata video node type] is
> > CSI-2 sensors and CSI-2 receivers. A CSI-2 link often carries both video
> > and metadata using two different datatypes. From the point of view of
> > the CSI-2 receiver or the DMA engines, video and metadata are not
> > distinguishable; the CSI-2 receiver receives one stream with two data
> > types, demultiplexes them, and passes them to different DMA engines. A
> > sensor could send two video datatypes, or even conceptually two metadata
> > datatypes, and this would work the exact same way, with each of the two
> > DMA engines capturing data to buffers without caring about the contents.
> > Given that the datatype selection happens at runtime, a given DMA engine
>
> 'happens at runtime': what decides this? The user-specified topology?
> Something else?
>
> Is this documented somewhere?

Yes. This ultimately depends on the formats configured by the user. Some
of the formats are metadata, and with sub-stream support, these will be
routable to different video nodes. If we designated video nodes as either
"metadata" or "pixel data" nodes, that designation would need to change
dynamically based on the selected format. I don't think that's really
worth it, as user space also doesn't expect the node type to change.

> To my knowledge there are no drivers that can do this in mainline code,
> right? The current drivers all create dedicated metadata devices.

Not right now, no. But it's just a question of when, not if.

> Also, this specific use-case is for capture only. Do you think this
> might be needed for metadata output as well?

As Laurent noted, the DMA engines don't know the data they handle, so in
principle it's possible that this could be relevant for output, too.

-- 
Regards,

Sakari Ailus
sakari.ailus@xxxxxxxxxxxxxxx
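
P.S. To make the format-driven selection concrete, here is a minimal
userspace sketch (not taken from Vandana's series; the /dev/video0 path
is just a placeholder) showing that whether a node carries metadata is
discoverable purely through the buffer type and format negotiated at
runtime, not through a node type:

	/*
	 * Illustrative sketch only: probe a node for metadata capture
	 * support by asking for the current format with the metadata
	 * buffer type. The device path is hypothetical.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/videodev2.h>

	int main(void)
	{
		struct v4l2_format fmt;
		int fd = open("/dev/video0", O_RDWR);

		if (fd < 0) {
			perror("open");
			return 1;
		}

		memset(&fmt, 0, sizeof(fmt));
		fmt.type = V4L2_BUF_TYPE_META_CAPTURE;

		/*
		 * If the node can capture metadata, G_FMT succeeds and
		 * reports the metadata format; on a pure pixel-data node
		 * it fails with EINVAL. Nothing in the node's name or
		 * type distinguishes the two cases.
		 */
		if (ioctl(fd, VIDIOC_G_FMT, &fmt) == 0)
			printf("meta format %4.4s, buffer size %u\n",
			       (char *)&fmt.fmt.meta.dataformat,
			       fmt.fmt.meta.buffersize);
		else
			perror("VIDIOC_G_FMT(META_CAPTURE)");

		close(fd);
		return 0;
	}

On a node that also captures pixel data, the same call with
V4L2_BUF_TYPE_VIDEO_CAPTURE would succeed as well; the distinction
lives entirely in the configured format, which is the point of not
baking it into vfl_devnode_type.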