Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline

On Sat, Mar 11, 2017 at 08:25:49AM -0300, Mauro Carvalho Chehab wrote:
> This situation is there since 2009. If I remember well, you tried to write
> such generic plugin in the past, but never finished it, apparently because
> it is too complex. Others tried too over the years. 
> 
> The last trial was done by Jacek, trying to cover just the exynos4 driver. 
> Yet, even such limited scope plugin was not good enough, as it was never
> merged upstream. Currently, there's no such plugins upstream.
> 
> If we can't even merge a plugin that solves it for just *one* driver,
> I have no hope that we'll be able to do it for the generic case.

This is what really worries me right now about the current proposal for
iMX6.  What's being proposed is to make the driver exclusively MC-based.

What that means is that existing applications are _not_ going to work
until we have some answer for libv4l2, and from what you've said above,
it seems that this has been attempted multiple times over the last _8_
years, and each time it's failed.

When thinking about it, it's quite obvious why merely trying to push
the problem into userspace fails:

  If we assert that the kernel does not have sufficient information to
  make decisions about how to route and control the hardware, then under
  what scenario does a userspace library have sufficient information to
  make those decisions?

So, merely moving the problem into userspace doesn't solve anything.

Loading the problem onto the user in the hope that the user knows
enough to properly configure it also doesn't work - who is going to
educate the user about the various quirks of the hardware they're
dealing with?

I don't think pushing it into platform-specific libv4l2 plugins works
either - as you say above, even just trying to develop a plugin for
exynos4 seems to have failed, so what makes us think that developing
a plugin for iMX6 is going to succeed?  Actually, that's exactly where
the problem lies.

Is "iMX6 plugin" even right?  That only deals with the complexity of
one part of the system - what about the source device, which as we
have already seen can be a tuner or a camera with its own multiple
sub-devices.  What if there's a video improvement chip in the chain
as well - how is a "generic" iMX6 plugin supposed to know how to deal
with that?

It seems to me that what's required is not an "iMX6 plugin" but a
separate plugin for each platform - or worse.  Consider boards like
the Raspberry Pi, where users can attach a variety of cameras.  I
don't think this approach scales.  (This is relevant: the iMX6 board
I have here has an RPi-compatible connector for a MIPI CSI2 camera.
In fact, the IMX219 module I'm using _is_ an RPi camera - it's the RPi
NoIR Camera V2.)

The iMX6 problem is way larger than just "which subdev do I need to
configure for control X" - if you look at the dot graphs both Steve
and myself have supplied, you'll notice that there are eight (yes,
8) video capture devices.  Let's say that we can solve the subdev
problem in libv4l2.  There's another problem lurking here - libv4l2
is /dev/video* based.  How does it know which /dev/video* device to
open?

We don't open by sensor, we open by /dev/video*.  In my case, there
is only one correct /dev/video* node for the attached sensor; the
other seven are totally irrelevant.  For other situations, there may
be a choice of three functional /dev/video* nodes.
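
To put that in concrete terms, here is a rough sketch - my own
illustration, not code from any existing plugin - of about as much as
a generic userspace component can discover today using the standard
media controller ioctls from linux/media.h (the /dev/media0 path is an
assumption for the example):

/* Sketch only: list every V4L I/O entity in the media graph and the
 * character device behind it.  This enumerates the candidate capture
 * nodes; it does not pick one.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/media.h>

int main(void)
{
        struct media_entity_desc ent;
        int fd = open("/dev/media0", O_RDWR);

        if (fd < 0)
                return 1;

        memset(&ent, 0, sizeof(ent));
        ent.id = MEDIA_ENT_ID_FLAG_NEXT;

        /* MEDIA_IOC_ENUM_ENTITIES fails with EINVAL when the last
         * entity has been returned, ending the loop. */
        while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &ent) == 0) {
                if (ent.type == MEDIA_ENT_F_IO_V4L)
                        printf("\"%s\" is char dev %u:%u\n",
                               ent.name, ent.dev.major, ent.dev.minor);
                ent.id |= MEDIA_ENT_ID_FLAG_NEXT;
        }

        close(fd);
        return 0;
}

On the hardware above, that lists the eight capture nodes - and
nothing more.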

Right now, for my case, there isn't the information exported from the
kernel to know which is the correct one, since that requires knowing
which virtual channel the data is going to be sent over the CSI2
interface.  That information is not present in DT, or anywhere.  It
only comes from system knowledge - in my case, I know that the IMX219
is currently being configured to use virtual channel 0.  SMIA cameras
are also configurable.  Then there are CSI2 cameras that can produce
different formats via different virtual channels (eg, a JPEG compressed
image on one channel while streaming an RGB image via another channel.)

Whether one or three of those nodes are usable in _this_ scenario
depends on the source format - again, another bit of
implementation-specific information that userspace would need to know.
Kernel space should know that, and it's discoverable by testing which
paths accept the source format - but that doesn't tell you ahead of
time which /dev/video* node to open.
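
The sort of probing I mean is simple enough to sketch - again my own
illustration, using nothing beyond VIDIOC_TRY_FMT from videodev2.h;
the node_accepts() helper name and the format parameters are
placeholders:

/* Sketch only: ask a candidate capture node whether it will take the
 * format the source is producing.  Note VIDIOC_TRY_FMT should not fail
 * on an unsupported pixelformat - drivers adjust the request instead -
 * so "accepted" here means the node handed back the format we asked
 * for.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

static int node_accepts(const char *devnode, __u32 pixfmt,
                        __u32 width, __u32 height)
{
        struct v4l2_format fmt;
        int fd = open(devnode, O_RDWR);
        int ret;

        if (fd < 0)
                return 0;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.pixelformat = pixfmt;
        fmt.fmt.pix.width = width;
        fmt.fmt.pix.height = height;

        ret = ioctl(fd, VIDIOC_TRY_FMT, &fmt);
        close(fd);

        return ret == 0 && fmt.fmt.pix.pixelformat == pixfmt;
}

Run that across /dev/video0../dev/video7 with, say, the sensor's
10-bit Bayer format and you learn which paths will take it - but it
still doesn't tell you which node the sensor's data will actually
reach.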

So, the problem space we have here is absolutely huge, and merely
having a plugin that activates when you open a /dev/video* node
really doesn't solve it.

All in all, I really don't think "let's hope someone writes a v4l2
plugin to solve it" is ever going to be successful.  I don't even
see that there will ever be a userspace application that is anything
more than a representation of the dot graphs that users can use to
manually configure the capture system with system knowledge.

I think everyone needs to take a step back and think long and hard
about this from the system usability perspective - I seriously
doubt that we will ever see any kind of solution to this if we
continue to progress with "we'll sort it in userspace some day."

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.


