Re: RFC: removing various special/obscure features from atomisp code ?

Hi Hans,

On Fri, Feb 17, 2023 at 04:18:55PM +0100, Hans de Goede wrote:
> Hi,
> 
> On 2/16/23 22:34, Laurent Pinchart wrote:
> > Hi Hans,
> > 
> > On Thu, Feb 16, 2023 at 04:47:51PM +0100, Hans de Goede wrote:
> >> On 2/16/23 15:48, Laurent Pinchart wrote:
> >>> On Thu, Feb 16, 2023 at 03:20:08PM +0100, Hans de Goede wrote:
> >>>> Hi All,
> >>>>
> >>>> I have been looking into moving the sensor registration for atomisp2
> >>>> over to v4l2-async, similar to how
> >>>> drivers/media/pci/intel/ipu3/cio2-bridge.c does things.
> >>>>
> >>>> Together with some other smaller changes this should allow the atomisp
> >>>> code to use standard sensor drivers instead of having its own forks of
> >>>> these drivers.
> >>>>
> >>>> While looking into this I realized that the current architecture of
> >>>> the atomisp2 code where it registers 8 /dev/video# nodes + many
> >>>> v4l2-subdevs is getting in the way of doing this.  At a minimum the
> >>>> current convoluted media-ctl graph makes it harder than necessary to
> >>>> make this change.
> >>>>
> >>>> So this makes me realize that it probably is time to make some changes
> >>>> to the atomisp code to remove a bunch of somewhat obscure (and
> >>>> untested / unused) features. I have been thinking about removing these
> >>>> for a long time already since they also get in the way of a bunch of
> >>>> other things like allowing the /dev/video# nodes to be opened multiple
> >>>> times.
> >>>>
> >>>> So my plan is to reduce the feature set to make atomisp work as more
> >>>> or less a standard webcam (with front/back sensors), which is how most
> >>>> hw uses it and also how all our (my) current testing uses it.
> >>>>
> >>>> This means reducing the graph to a single /dev/video0 output node + 2
> >>>> subdevs for the sensors. I might put one more node in the graph for
> >>>> selecting between the 3 CSI ports, or those could be 3 possible
> >>>> sources for /dev/video0.
> >>>
> >>> Could you briefly summarize the hardware architecture, and in particular
> >>> what building blocks are present, and how they're connected ? That will
> >>> help with the discussion.
> >>
> >> I can try, but it is complicated. The atomisp appears to mainly be
> >> some coprocessor thing (with, I guess, some hw-accel blocks on the
> >> side). The way it works from the driver's pov is that the firmware file
> >> really contains a whole bunch of different binaries to run on the
> >> co-processor, with a table describing the binaries, including supported
> >> input and output formats.
> >>
> >> Each binary represents a complete camera pipeline, going from
> >> directly reading from the CSI receiver on one end to DMA-ing
> >> the fully finished, ready-to-consume buffers in the requested
> >> destination fmt on the other end. The driver picks a binary
> >> based on the requested input + output formats and then uploads
> >> + starts that.
> >>
> >> So basically it is one big black box, where we hook up a
> >> sensor on one side and then on the other end say "give me YUYV
> >> or YU12, or ...". There are of course a whole bunch of
> >> processing parameters we can set, like lens shading correction
> >> tables (format unknown), etc. But basically it is still
> >> just a black box.
> >>
> >> So from a mediactl pov as I see it the whole thing is a single
> >> node in the graph.
> > 
> > Do you mean a single entity for the ISP ? I'd go for
> > 
> > sensor subdev -> CSI-2 RX subdev -> ISP subdev -> video device
> > 
> > Is that what you meant ?
> 
> Yes, although I'm not sure having "CSI-2 RX subdev" in there
> as a separate node makes much sense given how blackbox-y
> the entire working of the pipeline is.
> 
> At least I'm not aware of any way to e.g. skip the ISP and
> get raw bayer frames directly out of the CSI-2 receiver.

This should be technically possible, or at least it has been. I'm not quite
sure about this hardware version or firmware, though.

Even then, it's always possible to have one more pad for raw output in the
same sub-device. The ISP won't be usable for memory to memory processing
while capturing raw anyway --- it's not supported by the firmware.

-- 
Regards,

Sakari Ailus


