Re: [PATCH v4 0/4] media: camss: sm8250: Virtual channels support for SM8250

Quoting Bryan O'Donoghue (2022-10-17 15:22:10)
> On 17/10/2022 14:06, Kieran Bingham wrote:
> > Quoting Bryan O'Donoghue (2022-10-17 01:16:05)
> >> On 13/10/2022 13:12, quic_mmitkov@xxxxxxxxxxx wrote:
> >>> From: Milen Mitkov <quic_mmitkov@xxxxxxxxxxx>
> >>>
> >>> For v4:
> >>> - fixes the warning reported by the kernel test robot
> >>> - tiny code change to enable the vc functionality with the partially-applied
> >>>     multistream patches on linux-next (tested on tag:next-20221010)
> >>>
> >>> For v3:
> >>> - setting the sink pad format on the CSID entity will now propagate the
> >>>     format to the source pads to keep the subdev in a valid internal state.
> >>> - code syntax improvements
> >>>
> >>> For v2:
> >>> - code syntax improvements
> >>> - The info print for the enabled VCs was demoted to a dbg print. Can be
> >>>     enabled with dynamic debug, e.g.:
> >>> echo "file drivers/media/platform/qcom/camss/* +p" > /sys/kernel/debug/dynamic_debug/control
> >>>
> >>> NOTE: These changes depend on the multistream series, which has not
> >>> yet been merged upstream. However, some of the
> >>> multistream patches are already merged in linux-next (tested on
> >>> tag:next-20221010), including the patch that introduces the
> >>> video_device_pipeline_alloc_start() functionality. This allows
> >>> applying and using this series on linux-next without the
> >>> complete multistream set.
> >>>
> >>> The CSID hardware on SM8250 can demux the input data stream into a
> >>> maximum of 4 streams, depending on the virtual channel (vc)
> >>> or data type (dt) configuration.
> >>>
> >>> Situations in which demuxing is useful:
> >>> - HDR sensors that produce a 2-frame HDR output, e.g. a light and a dark frame
> >>>     (the setup we used for testing, with the imx412 sensor),
> >>>     or a 3-frame HDR output - light, medium-lit, dark frame.
> >>> - sensors with additional metadata that is streamed over a different
> >>>     virtual channel/datatype.
> >>> - sensors that produce frames with multiple resolutions in the same pixel
> >>>     data stream
> >>>
> >>> With these changes, the CSID entity has, as it did previously, a single
> >>> sink port (0), and always exposes 4 source ports (1, 2, 3, 4). The
> >>> virtual channel configuration is determined by which of the source ports
> >>> are linked to an output VFE line. For example, the links below will
> >>> configure the CSID driver to enable vc 0 and vc 1:
> >>>
> >>> media-ctl -l '"msm_csid0":1->"msm_vfe0_rdi0":0[1]'
> >>> media-ctl -l '"msm_csid0":2->"msm_vfe0_rdi1":0[1]'
> >>>
> >>> which will be demuxed and propagated into /dev/video0
> >>> and /dev/video1 respectively. With this, userspace can use
> >>> any normal V4L2 client application to start/stop/queue/dequeue from these
> >>> video nodes. Tested with the yavta app.
> >>>
> >>> The format of each RDI channel of the used VFE(s) (msm_vfe0_rdi0,
> >>> msm_vfe0_rdi1,...) must also be configured explicitly.
> >>>
> >>> Note that, in order to keep a valid internal subdevice state,
> >>> setting the sink pad format of the CSID subdevice will propagate
> >>> this format to the source pads. However, since the CSID hardware
> >>> can demux the input stream into several streams, each of which can
> >>> have a different format, each source pad's
> >>> format must then be set individually, e.g.:
> >>>
> >>> media-ctl -V '"msm_csid0":1[fmt:SRGGB10/3840x2160]'
> >>> media-ctl -V '"msm_csid0":2[fmt:SRGGB10/960x540]'
> >>>
> >>> Milen Mitkov (4):
> >>>     media: camss: sm8250: Virtual channels for CSID
> >>>     media: camss: vfe: Reserve VFE lines on stream start and link to CSID
> >>>     media: camss: vfe-480: Multiple outputs support for SM8250
> >>>     media: camss: sm8250: Pipeline starting and stopping for multiple
> >>>       virtual channels
> >>>
> >>>    .../platform/qcom/camss/camss-csid-gen2.c     | 54 ++++++++++------
> >>>    .../media/platform/qcom/camss/camss-csid.c    | 44 +++++++++----
> >>>    .../media/platform/qcom/camss/camss-csid.h    | 11 +++-
> >>>    .../media/platform/qcom/camss/camss-vfe-480.c | 61 ++++++++++++-------
> >>>    drivers/media/platform/qcom/camss/camss-vfe.c |  7 +++
> >>>    .../media/platform/qcom/camss/camss-video.c   | 21 ++++++-
> >>>    drivers/media/platform/qcom/camss/camss.c     |  2 +-
> >>>    7 files changed, 140 insertions(+), 60 deletions(-)
> >>>
> >>
> >> Hi Milen
> >>
> >> The set applies to next-20221013, including patch #4.
> >>
> >> I can confirm it doesn't break anything for me - though my sensor is a
> >> bog-standard imx577 so it doesn't have any VC support.
> > 
> > Interesting though - the IMX477 has the ability to convey metadata on a
> > separate VC... That's actually the thing holding back the RPi IMX477
> > driver from mainline, as it was anticipated that multiple
> > data streams would be supported with the new multiplexed streams API.
> > 
> > And I think we inferred that the IMX577 and IMX477 are closely related,
> > so should have similar capabilities for obtaining metadata channels?
> 
> Hmm I was not aware of that.
> 
> If we could import the rpi/imx477.c code into upstream/imx412.c it might
> be possible.
> 
> The core init is very similar
> 
> https://github.com/raspberrypi/linux/blob/rpi-5.19.y/drivers/media/i2c/imx477.c#L167
> 
> https://github.com/raspberrypi/linux/blob/rpi-5.19.y/drivers/media/i2c/imx412.c#L160
> 
> Maybe it would be possible to apply the rest of the imx477 config on top
> as a POC:
> 
> https://github.com/raspberrypi/linux/blob/rpi-5.19.y/drivers/media/i2c/imx477.c#L479
> 
> The similarity is borne out by the shared init code I can see in Leopard
> Imaging's driver. I'm not sure if it supports virtual channels - I'll
> have a look, though.
> 
> What's in the imx477 metadata?

The exact exposure of the captured frame, the exact gain, the frame length,
and even the temperature of the sensor at the time of capture (though I'm not
sure at /which/ time, if it's a long exposure).


https://git.libcamera.org/libcamera/libcamera.git/tree/src/ipa/raspberrypi/cam_helper_imx477.cpp#n168

"""
void CamHelperImx477::populateMetadata(const MdParser::RegisterMap &registers,
				       Metadata &metadata) const
{
	DeviceStatus deviceStatus;

	deviceStatus.shutterSpeed = exposure(registers.at(expHiReg) * 256 + registers.at(expLoReg));
	deviceStatus.analogueGain = gain(registers.at(gainHiReg) * 256 + registers.at(gainLoReg));
	deviceStatus.frameLength = registers.at(frameLengthHiReg) * 256 + registers.at(frameLengthLoReg);
	deviceStatus.sensorTemperature = std::clamp<int8_t>(registers.at(temperatureReg), -20, 80);

	metadata.set("device.status", deviceStatus);
}

"""

Having the embedded metadata from the sensor helps to ensure accurate
handling in the control loops, so I believe we would always prefer to
reference this when available, rather than what we 'think' we have
programmed (which, due to timing or any other error, might not be as
accurate as what the metadata will report).
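
In pseudo-code terms the preference is just a fallback, something like the
sketch below (the names - pickDeviceStatus, requestedStatus and this cut-down
DeviceStatus - are made up for illustration and are not libcamera API):

"""
#include <optional>

// Illustrative stand-in for the DeviceStatus fields that
// CamHelperImx477::populateMetadata() fills in above.
struct DeviceStatus {
	double shutterSpeed;      /* exposure time */
	double analogueGain;
	double sensorTemperature; /* degrees Celsius */
};

// Hypothetical helper: prefer what the sensor reports it actually used
// for this frame over what we asked it to use.
static DeviceStatus pickDeviceStatus(const std::optional<DeviceStatus> &parsedMetadata,
				     const DeviceStatus &requestedStatus)
{
	// The embedded metadata describes the frame as it was really
	// captured, so it wins whenever the parser could extract it.
	return parsedMetadata ? *parsedMetadata : requestedStatus;
}
"""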
--
Kieran


> 
> @Milen if you have the imx577 datasheet - I don't - perhaps we could 
> cherry-pick some of the code from imx477 and get the imx412.c->imx577 
> dumping VC data out with the RB5 camera mezzanine.
> 
> ---
> bod



