Hi!

> Thanks for coming up with this proposal. Please see my comments
> below.
>
> > Ok, can I get any comments on this one?
> > v4l2_open_complex("/file/with/descriptor", 0) can be used to open
> > the whole pipeline at once, and work with it as if it was one
> > device.
>
> I'm not convinced we should really be piggybacking on libv4l, but
> it's just a matter of where we put the code added by your patch, so
> let's put that aside.

There was some talk about this before, and libv4l2 is what we came up
with. Only libv4l2 is in a position to propagate controls to the
right devices. (A short usage sketch is at the end of this mail.)

> Who would be calling this function?
>
> The scenario that I could think of is:
> - legacy app would call open(/dev/video?), which would be handled by
>   the libv4l open hook (v4l2_open()?),

I don't think those kinds of legacy apps are in use any more. I'd
prefer not to deal with them.

> - v4l2_open() would check if the given /dev/video? figures in its
>   list of complex pipelines, for example by calling
>   v4l2_open_complex() and seeing if it succeeds,

I'd rather not have v4l2_open_complex() called on devices. We could
test if the argument is a regular file and then call it (see the
sketch at the end of this mail)... But again, that's a next step.

> - if it succeeds, the resulting fd would represent the complex
>   pipeline, otherwise it would just open the requested node
>   directly.
>
> I guess that could give some basic camera functionality on
> OMAP3-like hardware.

It definitely gives camera functionality on OMAP3. I'm using it to
take photos with a Nokia N900.

> For most of the current generation of imaging subsystems (e.g. Intel
> IPU3, Rockchip RKISP1) it's not enough. The reason is that there is
> more to be handled by userspace than just setting controls:
> - configuring pixel formats, resolutions, crops, etc. through the
>   whole pipeline - I guess that could be preconfigured per use case
>   inside the configuration file, though,

That may be a future plan. Note that these can be preconfigured,
unlike controls propagation...

> - forwarding buffers between capture and processing pipelines, i.e.
>   DQBUF a raw frame from the CSI2 video node and QBUF it to the ISP
>   video node,

My hardware does not need that, so I could not test it. I'll rely on
someone with the required hardware to provide that. (And you can take
the DQBUF'd frame and process it in software, at the cost of slightly
higher CPU usage, right?) There's a rough sketch of such forwarding
at the end of this mail.

> - handling metadata CAPTURE and OUTPUT buffers controlling the 3A
>   feedback loop - this might be optional if all we need is just the
>   ability to capture some frames, but it is required for getting
>   good quality,
> - actually mapping legacy controls into the above metadata,

I'm not sure what 3A is. If you mean hardware histograms and friends,
yes, it would be nice to support that, but, again, statistics can be
computed in software (see the histogram sketch at the end of this
mail).

> I guess it's just a matter of adding further code to handle those,
> though. However, it would build up a separate legacy framework that
> locks us into the legacy USB camera model, while we should rather
> be leaning towards a more flexible framework, such as Android Camera
> HALv3 or Pipewire. On top of such a framework, we could just have a
> very thin layer to emulate the legacy, single video node camera.

Yes, we'll need something more advanced. But... we also need something
to run the devices today, so that kernel drivers can be tested and do
not bitrot. That's why I'm doing this work. And I believe we should
work in steps before getting there... controls propagation can not be
done from an external application, so I'm starting with it.

> Some minor comments for the code follow.
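To make the control propagation point concrete, here is roughly how an
application would use the call. Only v4l2_open_complex() itself is
from my patch; the descriptor path and the exposure control below are
just placeholders:

#include <stdio.h>
#include <linux/videodev2.h>
#include <libv4l2.h>

int main(void)
{
	/* Open the whole pipeline described by the descriptor file. */
	int fd = v4l2_open_complex("/file/with/descriptor", 0);
	if (fd < 0) {
		perror("v4l2_open_complex");
		return 1;
	}

	/*
	 * From here on, the fd is meant to behave as one device:
	 * libv4l2 routes the control to whichever subdevice in the
	 * pipeline actually implements it.
	 */
	struct v4l2_control ctrl = {
		.id = V4L2_CID_EXPOSURE,
		.value = 1000,
	};
	if (v4l2_ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
		perror("VIDIOC_S_CTRL");

	v4l2_close(fd);
	return 0;
}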
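The "regular file" test mentioned above could be as simple as this
(a hypothetical helper, not part of the patch):

#include <sys/stat.h>

/*
 * A pipeline descriptor is a regular file, while /dev/video* nodes
 * are character devices, so v4l2_open() could dispatch on that:
 *
 *	if (is_regular_file(file))
 *		return v4l2_open_complex(file, v4l2_flags);
 */
static int is_regular_file(const char *path)
{
	struct stat st;

	return stat(path, &st) == 0 && S_ISREG(st.st_mode);
}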
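For the buffer forwarding, I imagine something along these lines; I
cannot test it on my hardware. It assumes both queues are already set
up with matching formats, the capture buffers are MMAP, the ISP node
takes DMABUF on an OUTPUT queue, and exp_fd[] holds the fds exported
earlier from the capture node with VIDIOC_EXPBUF:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/*
 * Dequeue one raw frame from the CSI2 node and hand the same
 * physical buffer to the ISP's output queue via its DMABUF fd.
 */
static int forward_one_frame(int csi_fd, int isp_fd, const int *exp_fd)
{
	struct v4l2_buffer buf;

	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.memory = V4L2_MEMORY_MMAP;
	if (ioctl(csi_fd, VIDIOC_DQBUF, &buf) < 0)
		return -1;

	struct v4l2_buffer out;

	memset(&out, 0, sizeof(out));
	out.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	out.memory = V4L2_MEMORY_DMABUF;
	out.index = buf.index;
	out.m.fd = exp_fd[buf.index];
	out.bytesused = buf.bytesused;
	return ioctl(isp_fd, VIDIOC_QBUF, &out);
}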
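And this is the kind of software statistics I mean; a luminance
histogram over the Y plane is enough for basic auto-exposure, just at
some CPU cost:

#include <stddef.h>
#include <string.h>

/*
 * Software fallback for a hardware statistics unit: histogram of the
 * Y (luma) plane of one captured frame.
 */
static void luma_histogram(const unsigned char *y, size_t len,
			   unsigned int hist[256])
{
	memset(hist, 0, 256 * sizeof(hist[0]));
	while (len--)
		hist[*y++]++;
}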
Ok, let me send this, then go through the comments.

Best regards,
								Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(czech, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html