Re: [RFC, libv4l]: Make libv4l2 usable on devices with complex pipeline

On Wed, Jun 6, 2018 at 5:46 PM Pavel Machek <pavel@xxxxxx> wrote:
>
> Hi!
>
> > Thanks for coming up with this proposal. Please see my comments below.
> >
> > > Ok, can I get any comments on this one?
> > > v4l2_open_complex("/file/with/descriptor", 0) can be used to open the
> > > whole pipeline at once, and work with it as if it were one device.
> >
> > I'm not convinced if we should really be piggy backing on libv4l, but
> > it's just a matter of where we put the code added by your patch, so
> > let's put that aside.
>
> There was some talk about this before, and libv4l2 is what we came
> up with. Only libv4l2 is in a position to propagate controls to the
> right devices.
>
> > Who would be calling this function?
> >
> > The scenario that I could think of is:
> > - legacy app would call open(/dev/video?), which would be handled by
> > libv4l open hook (v4l2_open()?),
>
> I don't think those kinds of legacy apps are in use any more. I'd
> prefer not to deal with them.

In another thread ("[ANN v2] Complex Camera Workshop - Tokyo - Jun,
19"), Mauro has mentioned a number of those:

"open source ones (Camorama, Cheese, Xawtv, Firefox, Chromium, ...) and closed
source ones (Skype, Chrome, ...)"

>
> > - v4l2_open() would check if given /dev/video? figures in its list of
> > complex pipelines, for example by calling v4l2_open_complex() and
> > seeing if it succeeds,
>
> I'd rather not have v4l2_open_complex() called on devices. We could
> test if the argument is a regular file and then call it... But again,
> that's the next step.
>
> > - if it succeeds, the resulting fd would represent the complex
> > pipeline, otherwise it would just open the requested node directly.

What's the answer to my original question of who would be calling
v4l2_open_complex(), then?

>
> > I guess that could give some basic camera functionality on OMAP3-like hardware.
>
> It definitely gives camera functionality on OMAP3. I'm using it to
> take photos with Nokia N900.
>
> > For most of the current generation of imaging subsystems (e.g. Intel
> > IPU3, Rockchip RKISP1) it's not enough. The reason is that there is
> > more to be handled by userspace than just setting controls:
> >  - configuring pixel formats, resolutions, crops, etc. through the
> > whole pipeline - I guess that could be preconfigured per use case
> > inside the configuration file, though,
>
> That may be a future plan. Note that these can be preconfigured,
> unlike controls propagation...
>
> >  - forwarding buffers between capture and processing pipelines, i.e.
> > DQBUF raw frame from CSI2 video node and QBUF to ISP video node,
>
> My hardware does not need that, so I could not test it. I'll rely on
> someone with required hardware to provide that.
>
> (And you can DQBUF the raw frame and process it in software, at the
> cost of slightly higher CPU usage, right?)
>
> >  - handling metadata CAPTURE and OUTPUT buffers controlling the 3A
> > feedback loop - this might be optional if all we need is just ability
> > to capture some frames, but required for getting good quality,
> >  - actually mapping legacy controls into the above metadata,
>
> I'm not sure what 3A is. If you mean hardware histograms and friends,
> yes, it would be nice to support that, but, again, statistics can be
> computed in software.

Auto-exposure, auto-white-balance, auto-focus. In complex camera
subsystems these need to be done in software. On most hardware
platforms, the ISP provides the necessary input data (statistics) and
software calculates the required processing parameters.

>
> > I guess it's just a matter of adding further code to handle those,
> > though. However, it would build up a separate legacy framework that
> > locks us into the legacy USB camera model, while we should rather
> > be leaning towards a more flexible framework, such as Android Camera
> > HALv3 or Pipewire. On top of such framework, we could just have a very
> > thin layer to emulate the legacy, single video node, camera.
>
> Yes, we'll need something more advanced.
>
> But.. we also need something to run the devices today, so that kernel
> drivers can be tested and do not bitrot. That's why I'm doing this
> work.

I guess the most important bit I'm missing, then, is the intended use
case for this. It seems related to my earlier, unanswered question
about who would be calling v4l2_open_complex().

What userspace applications would be used for this testing?

Best regards,
Tomasz


