Hi Kieran,

On Fri, Nov 12, 2021 at 05:51:57PM +0000, Kieran Bingham wrote:
> Quoting Laurent Pinchart (2021-11-12 10:46:56)
> > On Fri, Nov 12, 2021 at 10:32:31AM +0000, Dave Stevenson wrote:
> > > On Thu, 11 Nov 2021 at 22:04, Laurent Pinchart wrote:
> > > > On Thu, Nov 11, 2021 at 07:30:39PM +0000, Dave Stevenson wrote:
>
> <big snip>
>
> > > > Refcount the users. Opening the subdev counts as one, and streaming
> > > > counts as one. You can now hold the power on if you wish to do so.
> > >
> > > It's the "let userspace worry about it" that worries me. The same
> > > approach was taken with MC, and it was a pain in the neck for users
> > > until libcamera came along a decade later.
> > > IMHO V4L2 as an API should be fit for purpose and usable with or
> > > without libcamera.
> >
> > It really depends on the type of device I'm afraid :-) If you want to
> > capture processed images with a raw Bayer sensor on the RPi, you need
> > to control the ISP, and the 3A algorithms need to run in userspace.
> > For other types of devices, going straight to the kernel API is easier
> > (and can sometimes be preferred).
> >
> > At the end of the day, I don't think it makes much of a difference
> > though. Once the libcamera API stabilizes, the library gets packaged
> > by distributions and applications start using it (or possibly even
> > through pipewire), nobody will complain about MC anymore :-) The
> > important part,
>
> I don't really want to pull this thread further away from $SUBJECT .. but:
>
> Unfortunately, I don't think that's true.
>
> We've still got a long way to go!
>
> libcamera isn't enough to cover all MC use cases. The RPi for instance
> has the ability to capture HDMI in through the CSI-2 receiver with a
> TC358743 or such. This won't need an IPA or 3A, but might want to go
> through the ISP for scaling or format conversions...

I was indeed mostly thinking about the camera use cases, as we were
discussing lens control. There's certainly more than that, with a need
to at least configure the unicam MC pipeline to capture from, for
instance, an HDMI-to-CSI-2 converter. This isn't something that
libcamera was designed for, and I don't know whether the use case could
be retrofitted, or whether a different userspace framework would be
better.

> Some time ago, I started to explore how we could handle 'easily'
> capturing non-camera devices. But it was not in scope for libcamera.
>
> > in my opinion, is to handle the complexity somewhere in a framework so
> > that applications don't have to do so manually.
>
> Yes, the complexity needs to be handled somewhere. Applications should
> be able to work with a generic interface and get their video frames.
> But right now, I don't think applications have this, and key areas
> needed to support that are not under development, or even under
> consideration yet, as far as I can tell.

-- 
Regards,

Laurent Pinchart
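
P.S. To make the refcounting idea quoted above a bit more concrete,
here's a rough sketch of what I had in mind. It isn't taken from any
existing driver: the struct and function names (my_sensor,
my_sensor_power_on/off, ...) are made up for illustration, and a real
driver would most likely sit on top of runtime PM instead of an
open-coded counter. The point is only that open() and s_stream() each
take a reference, so power stays on while either a file handle is open
or the pipeline is streaming.

#include <linux/kernel.h>
#include <linux/mutex.h>
#include <media/v4l2-subdev.h>

struct my_sensor {
	struct v4l2_subdev sd;
	struct mutex lock;
	unsigned int power_count;	/* open() + streaming references */
};

static inline struct my_sensor *to_my_sensor(struct v4l2_subdev *sd)
{
	return container_of(sd, struct my_sensor, sd);
}

/* Stand-ins for the real power handling (regulators, clocks, GPIOs). */
static int my_sensor_power_on(struct my_sensor *sensor) { return 0; }
static void my_sensor_power_off(struct my_sensor *sensor) { }

static int my_sensor_get_power(struct my_sensor *sensor)
{
	int ret = 0;

	mutex_lock(&sensor->lock);
	if (sensor->power_count++ == 0)
		ret = my_sensor_power_on(sensor);
	if (ret)
		sensor->power_count--;
	mutex_unlock(&sensor->lock);

	return ret;
}

static void my_sensor_put_power(struct my_sensor *sensor)
{
	mutex_lock(&sensor->lock);
	if (--sensor->power_count == 0)
		my_sensor_power_off(sensor);
	mutex_unlock(&sensor->lock);
}

/* One reference per open file handle on the subdev node... */
static int my_sensor_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
	return my_sensor_get_power(to_my_sensor(sd));
}

static int my_sensor_close(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
	my_sensor_put_power(to_my_sensor(sd));
	return 0;
}

static const struct v4l2_subdev_internal_ops my_sensor_internal_ops = {
	.open = my_sensor_open,
	.close = my_sensor_close,
};

/* ...and one while the pipeline is streaming. */
static int my_sensor_s_stream(struct v4l2_subdev *sd, int enable)
{
	struct my_sensor *sensor = to_my_sensor(sd);

	if (enable)
		return my_sensor_get_power(sensor);

	my_sensor_put_power(sensor);
	return 0;
}

static const struct v4l2_subdev_video_ops my_sensor_video_ops = {
	.s_stream = my_sensor_s_stream,
};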
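
And for the HDMI capture use case, the MC plumbing a generic
application (or whatever non-camera framework we end up with) has to do
today looks roughly like the sketch below. It's essentially what a
couple of media-ctl and v4l2-ctl invocations perform. Entity IDs, pad
numbers and device nodes are placeholders that depend on the actual
media graph, and error handling is omitted for brevity.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include <linux/media.h>
#include <linux/media-bus-format.h>
#include <linux/v4l2-subdev.h>
#include <linux/videodev2.h>

int main(void)
{
	/* Device nodes and entity/pad numbers below are placeholders. */
	int media_fd = open("/dev/media0", O_RDWR);
	int subdev_fd = open("/dev/v4l-subdev0", O_RDWR);	/* tc358743 */
	int video_fd = open("/dev/video0", O_RDWR);		/* unicam image node */

	/* Enable the tc358743 -> unicam link. */
	struct media_link_desc link = {
		.source = { .entity = 1, .index = 0 },	/* tc358743 source pad */
		.sink   = { .entity = 4, .index = 0 },	/* unicam sink pad */
		.flags  = MEDIA_LNK_FL_ENABLED,
	};
	ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);

	/* Query the detected DV timings on the HDMI input and apply them. */
	struct v4l2_dv_timings timings;
	ioctl(subdev_fd, VIDIOC_SUBDEV_QUERY_DV_TIMINGS, &timings);
	ioctl(subdev_fd, VIDIOC_SUBDEV_S_DV_TIMINGS, &timings);

	/* Set a matching media bus format on the tc358743 source pad. */
	struct v4l2_subdev_format fmt = {
		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
		.pad = 0,
		.format = {
			.width = timings.bt.width,
			.height = timings.bt.height,
			.code = MEDIA_BUS_FMT_UYVY8_1X16,
			.field = V4L2_FIELD_NONE,
		},
	};
	ioctl(subdev_fd, VIDIOC_SUBDEV_S_FMT, &fmt);

	/* And a matching pixel format on the unicam video node. */
	struct v4l2_format vfmt = {
		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.fmt.pix = {
			.width = timings.bt.width,
			.height = timings.bt.height,
			.pixelformat = V4L2_PIX_FMT_UYVY,
			.field = V4L2_FIELD_NONE,
		},
	};
	ioctl(video_fd, VIDIOC_S_FMT, &vfmt);

	/* VIDIOC_REQBUFS / QBUF / STREAMON as usual from here on. */

	close(video_fd);
	close(subdev_fd);
	close(media_fd);
	return 0;
}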