Re: [RFC] Motion Detection API

Hi Sylwester,

My apologies for the delayed answer.

On Sun, Jun 09, 2013 at 07:56:23PM +0200, Sylwester Nawrocki wrote:
> On 06/03/2013 03:25 AM, Sakari Ailus wrote:
> >On Wed, May 22, 2013 at 11:41:50PM +0200, Sylwester Nawrocki wrote:
> >>[...]
> >>>>>I'm in favour of using a separate video buffer queue for passing
> >>>>>low-level metadata to user space.
> >>>>
> >>>>Sure. I certainly see a need for such an interface. I wouldn't like to
> >>>>see it as the only option, however. One of the main reasons for
> >>>>introducing the MPLANE API was to allow capture of metadata. We are
> >>>>going to finally prepare some RFC regarding usage of a separate plane
> >>>>for metadata capture. I'm not sure yet how exactly it would look in
> >>>>detail; we've only discussed this topic roughly with Andrzej.
> >>>
> >>>I'm fine with that not being the only option; however, it's unbeatable
> >>>when it comes to latency. So perhaps we should allow using multi-plane
> >>>buffers for the same purpose as well.
> >>>
> >>>But how to choose between the two?
> >>
> >>I think we need some example implementation for metadata capture over the
> >>multi-plane interface and with a separate video node. Without such an
> >>implementation/API draft it is a bit difficult to discuss this further.
> >
> >Yes, that'd be quite nice.
> 
> I still haven't found time to look into that; I got stuck debugging some
> hardware-related issues which took much longer than expected...

Any better luck now? :-) :-)
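
In case it helps, below is a very rough sketch of how the multi-plane variant
could look from user space. The two-plane layout (plane 0 image, plane 1
metadata) is only an assumption for illustration, and the buffer setup
(REQBUFS/QUERYBUF/mmap()) is left out:

/*
 * Rough sketch only: dequeue one multi-plane buffer where plane 0 carries
 * the image and plane 1 carries the sensor's low-level metadata.  The
 * two-plane layout is an assumption for illustration; buffer setup and
 * mmap() of the planes are not shown.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

#define NUM_PLANES	2

/* Plane start addresses mmap()ed during buffer setup (not shown). */
extern void *plane_start[][NUM_PLANES];

static int dequeue_image_and_metadata(int fd, void **image, size_t *image_size,
				      void **meta, size_t *meta_size)
{
	struct v4l2_plane planes[NUM_PLANES];
	struct v4l2_buffer buf;

	memset(&buf, 0, sizeof(buf));
	memset(planes, 0, sizeof(planes));

	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.m.planes = planes;
	buf.length = NUM_PLANES;

	if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
		return -1;

	/* Plane 0: the image payload. */
	*image = plane_start[buf.index][0];
	*image_size = planes[0].bytesused;

	/* Plane 1: per-frame metadata, available once the buffer is. */
	*meta = plane_start[buf.index][1];
	*meta_size = planes[1].bytesused;

	return 0;
}

The downside of this variant is that the metadata only becomes available to
user space once the whole buffer is dequeued, which is where a separate
metadata queue wins on latency.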

> >There are actually a number of things that I think would be needed to
> >support what's discussed above. Extended frame descriptors (I'm preparing
> >RFC v2 --- yes, really!) are one.
> 
> Sounds great; I'm really looking forward to improving this part and having
> it used in more drivers.
> 
> >Also, creating video nodes based on how many different content streams
> >there are doesn't make much sense to me. A quick and dirty solution would
> >be to create a low-level metadata queue type to avoid having to create
> >more video nodes. I think I'd prefer a more generic solution, though.
> 
> Hmm, does that mean having multiple buffer queues on a video device node,
> similarly to, e.g., the M2M interface? Not sure if it would be a bad

Yes; the metadata and the images would arrive through the same video node but
on different buffer queues. This way we avoid creating new video nodes based
on whether metadata exists or not.

But just creating a single separate metadata queue type is slightly hackish:
there can be multiple metadata regions in the frame, and the sensor can also
produce a JPEG image (although I'd consider those rare; I've never worked on
one myself).
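
Still, from user space that option could look roughly like the sketch below.
The buffer type name and value are made up purely for illustration; nothing
like it exists in videodev2.h yet:

/*
 * Sketch only: a per-frame metadata queue living on the same video node as
 * the image queue.  V4L2_BUF_TYPE_LOWLEVEL_METADATA is hypothetical; the
 * name and value are invented for this example and are not in videodev2.h.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

#define V4L2_BUF_TYPE_LOWLEVEL_METADATA		0x100	/* hypothetical */

static int start_metadata_queue(int fd, unsigned int count)
{
	struct v4l2_requestbuffers req;
	int type = V4L2_BUF_TYPE_LOWLEVEL_METADATA;

	memset(&req, 0, sizeof(req));
	req.count = count;
	req.type = V4L2_BUF_TYPE_LOWLEVEL_METADATA;
	req.memory = V4L2_MEMORY_MMAP;

	/*
	 * The image queue on the same fd keeps using
	 * V4L2_BUF_TYPE_VIDEO_CAPTURE(_MPLANE); only the buffer type
	 * distinguishes the two queues.
	 */
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		return -1;

	/* QUERYBUF/mmap()/QBUF of the metadata buffers omitted. */

	return ioctl(fd, VIDIOC_STREAMON, &type);
}

The nice part is that the driver can return a metadata buffer to user space as
soon as the metadata lines have landed in memory, well before the image itself
completes, which is what makes this approach hard to beat latency-wise.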

> idea at all. The number of video/subdev nodes can get ridiculously high in
> the case of more complex devices. For example, in the case of the Samsung
> Exynos SoC imaging subsystem the total number of device nodes is getting
> close to *30*, and it is going to be at least that many once all the
> functionality is covered.
> 
> So one video node per DMA engine is probably a fair rule, but there might be
> reasons to avoid adding more device nodes to cover "logical" streams.

The number in my opinion isn't an issue, but it would be if device nodes
appeared and disappeared dynamically based on e.g. the sensor configuration.

-- 
Cheers,

Sakari Ailus
e-mail: sakari.ailus@xxxxxx	XMPP: sailus@xxxxxxxxxxxxxx