Re: [PATCH v7 00/16] Intel IPU3 ImgU patchset

Hi Laurent, Bingbu,

On Mon, Mar 25, 2019 at 06:06:30AM +0200, Laurent Pinchart wrote:
> Hi Bingbu,
>
[snip]

> > >>>>>
> > >>>>> Thank you for the information. This will need to be captured in the
> > >>>>> documentation, along with information related to how each block in the
> > >>>>> hardware pipeline interacts with the image size. It should be possible for
> > >>>>> a developer to compute the output and viewfinder resolutions based on the
> > >>>>> parameters of the image processing algorithms just with the information
> > >>>>> contained in the driver documentation.
> > >
> > > In libcamera development we're now at the point of having to calculate
> > > the sizes to apply to all intermediate pipeline stages based on the
> > > following information:
> > >
> > > 1) Main output resolution
> > > 2) Secondary output resolution (optional)
> > > 3) Image sensor's available resolutions
> > >
> > > Right now that information is captured in the XML file you linked
> > > above, but we need a programmatic way to do the calculation,
> > > without going through an XML file that refers to two specific
> > > sensors only.
> > >
> > > As Laurent said here, this should come as part of the documentation
> > > for driver users and would unblock libcamera IPU3 support
> > > development.
> > >
> > > Could you provide documentation on how to calculate each
> > > intermediate step's resolution?
> >
> > All the intermediate step resolutions are generated by a specific tool
> > from the sensor input and output resolutions.
> >
> > The tool tries to keep the maximum FOV and has knowledge of all the
> > limitations of each intermediate hardware component (mainly BDS and GDC).
>
> That's exactly what we want to do in software in libcamera :-) And
> that's why we need more information about the limitations of each
> intermediate hardware component. Eventually those limitations should be
> documented in the IPU3 driver documentation in the kernel sources, but
> for now we can move forward if they're just communicated by e-mail (if
> time permits we may be able to submit a kernel patch to integrate that
> in the documentation).
>
> > Currently, there is no simple calculation to get the intermediate
> > resolutions.
> > Let's take some effort to try to find a programmatic way to do the
> > calculation instead of relying on the tool.

Thank you for your effort.
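
To make the request concrete, this is roughly the interface we would
like to be able to implement in libcamera. All names here are
placeholders, and the constraints needed to fill in the implementation
are exactly what we are asking to have documented:

struct imgu_size {
        unsigned int width;
        unsigned int height;
};

struct imgu_pipe_config {
        struct imgu_size if_size;       /* Input Feeder crop */
        struct imgu_size bds_size;      /* Bayer Down Scaler output */
        struct imgu_size gdc_size;      /* GDC output */
};

/*
 * Compute the intermediate IF/BDS/GDC resolutions from the sensor
 * resolution and the requested main (and optional secondary) output
 * resolutions, keeping the maximum field of view.  The BDS and GDC
 * limitations applied by the Intel tool are the missing piece needed
 * to implement this.
 */
int imgu_calculate_pipe_config(const struct imgu_size *sensor,
                               const struct imgu_size *main_output,
                               const struct imgu_size *vf_output,
                               struct imgu_pipe_config *config);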

> >
> > > [snip]
> > >
> > >>>>>>>>> 3. The ImgU V4L2 subdev composing should be set by using the
> > >>>>>>>>> VIDIOC_SUBDEV_S_SELECTION on pad 0, with V4L2_SEL_TGT_COMPOSE as the
> > >>>>>>>>> target, using the BDS height and width.
> > >>>>>>>>>
> > >>>>>>>>> Once these 2 steps are done, the raw bayer frames can be input to the
> > >>>>>>>>> ImgU V4L2 subdev for processing.
> > >>>>>>>> Do I need to capture from both the output and viewfinder nodes ? How
> > >>>>>>>> are they related to the IF -> BDS -> GDC pipeline, are they both fed
> > >>>>>>>> from the GDC output ? If so, how does the viewfinder scaler fit in that
> > >>>>>>>> picture ?
> > >>>>>> The output capture should be set, the viewfinder can be disabled.
> > >>>>>> The IF and BDS are seen as crop and compose of the imgu input video
> > >>>>>> device. The GDC is seen as the subdev sink pad and OUTPUT/VF are source
> > >>>>>> pads.
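
For reference, this is how I interpret step 3 quoted above, as a
minimal sketch (the subdev node path and the BDS size values are
placeholders on my side):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/v4l2-subdev.h>

/* Set the BDS size as the compose rectangle on the ImgU sink pad. */
static int imgu_set_bds_compose(const char *subdev_node,
                                unsigned int bds_width,
                                unsigned int bds_height)
{
        struct v4l2_subdev_selection sel;
        int fd, ret;

        fd = open(subdev_node, O_RDWR);
        if (fd < 0)
                return -1;

        memset(&sel, 0, sizeof(sel));
        sel.which = V4L2_SUBDEV_FORMAT_ACTIVE;
        sel.pad = 0;                            /* ImgU sink pad */
        sel.target = V4L2_SEL_TGT_COMPOSE;      /* BDS output size */
        sel.r.width = bds_width;
        sel.r.height = bds_height;

        ret = ioctl(fd, VIDIOC_SUBDEV_S_SELECTION, &sel);
        close(fd);

        return ret;
}
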
> > >
> > > This is another point that we would like to have clarified:
> > > 1) which outputs are mandatory and which ones are not
> > > 2) which operations are mandatory on unused outputs
> > > 3) does the 'ipu_pipe_mode' control impact this
> > >
> > > As you mentioned here, "output" seems to be mandatory, while
> > > "viewfinder" and "stat" are optional. We have tried using the "output"
> > > video node only, but the system hangs in an unrecoverable state.
> >
> > Yes, main output is mandatory, 'vf' and 'stat' are optional.
>
> I will let Jacopo confirm this, but unless I'm mistaken, when he tried
> to use the main output only (with the links between the ImgU subdev and
> the vf and stat video nodes disabled), the driver would hang without
> processing any frame. I believe this was a complete system hang,
> requiring a hard reboot to recover.
>

Yes, that's what I have noticed.

On the other hand, if I link, configure, prepare buffers and start the
'vf' and 'stat' nodes, but never queue buffers there, I can capture
from output only.
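
In code terms, this is roughly what I do on each unused node (the node
path and buffer type are parameters to adjust per node, and
VIDIOC_S_FMT is omitted for brevity): reserve buffers and start
streaming, but never queue anything:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

/*
 * Prepare and start a capture node that will never be dequeued from:
 * reserve memory with VIDIOC_REQBUFS and call VIDIOC_STREAMON, but
 * never call VIDIOC_QBUF.
 */
static int imgu_start_unused_node(const char *node, enum v4l2_buf_type type)
{
        struct v4l2_requestbuffers reqbufs;
        int fd;

        fd = open(node, O_RDWR);
        if (fd < 0)
                return -1;

        memset(&reqbufs, 0, sizeof(reqbufs));
        reqbufs.count = 1;
        reqbufs.type = type;
        reqbufs.memory = V4L2_MEMORY_MMAP;

        if (ioctl(fd, VIDIOC_REQBUFS, &reqbufs) < 0 ||
            ioctl(fd, VIDIOC_STREAMON, &type) < 0) {
                close(fd);
                return -1;
        }

        /* Keep the node open and streaming, without queueing buffers. */
        return fd;
}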

> > > What I have noticed instead is that the viewfinder and stat nodes
> > > need to be:
> > > 1) Linked to the respective "ImgU" subdevice pads
> > > 2) Format configured
> > > 3) Memory reserved
> > > 4) video device nodes started
> > >
> > > It is not required to queue/dequeue buffers from viewfinder and stat,
> > > but steps 1-4 have to be performed.
> > >
> > > Can you confirm this is intended?
> >
> > viewfinder and stats are enabled when the links for the respective
> > subdev pads are enabled, and the driver then uses these input
> > conditions to find the binary to run.
> >

As Laurent reported above, if I leave the 'vf' and 'stat' links
disabled, the system hangs.

> > > Could you please list all the steps that have to be applied to the
> > > ImgU's capture video nodes, and which ones are mandatory and which ones
> > > are optional, for the following use cases:
> > > 1) Main output capture only
> > > 2) Main + secondary output capture
> > > 3) Secondary capture only.
> >
> > I think 3) is not supported.
> >
> > The steps are:
> > 1). link the respective subdevices as necessary:
> > input --> imgu -->output
> >             |  -->vf
> >             |  -->3a stats

For which use case in the list reported above?

 1) Main output capture only
        Do the 'vf' and 'stat' links need to be enabled?

 2) Main + secondary output capture
        Does the 'stat' link need to be enabled?

 3) Secondary capture only.
        not supported
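
Whatever the answer turns out to be, this is a minimal sketch of how I
enable a single link from userspace with MEDIA_IOC_SETUP_LINK (the
entity and pad numbers are placeholders that come from enumerating the
media graph):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/media.h>

/* Enable one source-pad -> sink-pad link on the media device. */
static int imgu_enable_link(const char *media_node,
                            __u32 src_entity, __u16 src_pad,
                            __u32 sink_entity, __u16 sink_pad)
{
        struct media_link_desc link;
        int fd, ret;

        fd = open(media_node, O_RDWR);
        if (fd < 0)
                return -1;

        memset(&link, 0, sizeof(link));
        link.source.entity = src_entity;
        link.source.index = src_pad;
        link.sink.entity = sink_entity;
        link.sink.index = sink_pad;
        link.flags = MEDIA_LNK_FL_ENABLED;

        ret = ioctl(fd, MEDIA_IOC_SETUP_LINK, &link);
        close(fd);

        return ret;
}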

> >
> > 2). set all the formats for input, output and intermediate resolutions.
> > 3). start stream
> >
> > The ipu pipe_mode will not impact the whole pipe behavior. It just asks
> > the firmware to run different processing to generate outputs in the same
> > format.
>

I would appreciate a better description of the pipe_mode control, in
order to better understand when and if the library has to modify its
value and which mode to use (0=video, 1=still_capture).
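
For reference, once the semantics are clear I assume the library would
set it with a plain VIDIOC_S_CTRL along these lines (the control ID
below is a placeholder I made up, not the real one from the driver
headers):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

/*
 * Set the ImgU pipe mode control: 0 = video, 1 = still capture.
 * The control ID value is a placeholder; the real ID should come from
 * the driver's public header (or be looked up by control name).
 */
#define IMGU_CID_PIPE_MODE_PLACEHOLDER (V4L2_CID_USER_BASE + 0x1000)

static int imgu_set_pipe_mode(const char *node, int mode)
{
        struct v4l2_control ctrl = {
                .id = IMGU_CID_PIPE_MODE_PLACEHOLDER,
                .value = mode,
        };
        int fd, ret;

        fd = open(node, O_RDWR);
        if (fd < 0)
                return -1;

        ret = ioctl(fd, VIDIOC_S_CTRL, &ctrl);
        close(fd);

        return ret;
}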

Thanks
   j

> --
> Regards,
>
> Laurent Pinchart
