On Friday, 18 May 2018 at 15:38 +0300, Laurent Pinchart wrote:
> > Before libv4l, media support for a given device was limited to a few
> > apps that knew how to decode the format. There were even cases where a
> > proprietary app was required, as no open source decoders were available.
> >
> > From my PoV, the biggest gain with libv4l is that the same group of
> > maintainers can ensure that the entire solution (kernel driver and
> > low-level userspace support) will provide everything required for an
> > open source app to work with it.
> >
> > I'm not sure how we would keep enforcing that if the pipeline setup
> > and control propagation logic for a specific piece of hardware is
> > delegated to PipeWire. It seems easier to keep doing it in a libv4l
> > (version 2) and let PipeWire use it.
>
> I believe we need to first study PipeWire in more detail. I have no
> personal opinion yet as I haven't had time to investigate it. That being
> said, I don't think that libv4l with closed-source plugins would be much
> better than a closed-source PipeWire plugin. My main concern once we
> provide a userspace camera stack API is that vendors might implement that
> API in a closed-source component that calls into a kernel driver
> implementing a custom API, with all knowledge about the camera located in
> the closed-source component. I'm not sure how to prevent that; my best
> proposal would be to make V4L2 so useful that vendors wouldn't even think
> about a different solution (possibly coupled with the pressure put by
> platform vendors such as Google, who mandate upstream kernel drivers for
> Chrome OS, but that's still limited, as even when it comes to Google
> there's no such pressure on the Android side).

If there are proprietary plugins, then I don't think it makes any
difference where this is implemented. The difference is the feature set we
expose. 3A is per device, but multiple streams with per-request controls
are also possible. PipeWire gives a central place to manage this, while
giving multiple processes access to the camera streams.

I think in the end, what would fit better is something like the Android
Camera HAL2. But we could encourage open source by maintaining a base
implementation that covers all the V4L2 aspects, leaving only the 3A part
of the work to the vendors. Maybe we need to come up with an abstraction
that does not prevent multi-stream, but only requires 3A per vendor
(saying per vendor, as some of this could be open-sourced by third
parties); see the rough sketch in the p.p.s. below. Just thinking out
loud now ;-P

Nicolas

p.s. Do we have the Intel / IPU3 folks in the loop? This is likely the
most pressing hardware, as it's shipping on many laptops now.
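p.p.s. To make the 3A-per-vendor idea a bit more concrete, here is a very
rough sketch of what such a plugin interface could look like. None of
these names exist in libv4l or PipeWire today; they are purely
hypothetical, just to illustrate the split where the common open source
layer owns pipeline setup, buffer handling and multi-stream routing, and
the vendor only supplies the per-frame statistics-to-parameters mapping:

/* Purely hypothetical interface, nothing that exists today. */

#include <stdint.h>

/* Filled by the common layer from the ISP statistics buffers. */
struct cam3a_stats {
	uint32_t frame_seq;
	/* ... AE/AWB/AF statistics as produced by the ISP ... */
};

/*
 * Returned by the vendor algorithm, applied by the common layer
 * through regular V4L2 controls on the sensor and ISP subdevs.
 */
struct cam3a_params {
	uint32_t exposure_us;
	uint32_t analogue_gain;
	uint32_t wb_gains[4];
	int32_t  lens_position;
};

/*
 * The only vendor-specific piece: statistics in, parameters out.
 * Everything else (formats, requests, multi-stream) stays generic.
 */
struct cam3a_algo_ops {
	int  (*init)(void **priv, const char *sensor_name);
	int  (*process)(void *priv, const struct cam3a_stats *stats,
			struct cam3a_params *params);
	void (*close)(void *priv);
};

With something that small, the proprietary surface would be limited to the
tuning/algorithm part, and everything the kernel API exposes would stay
usable by open source apps.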