Re: [RFC PATCH v5 6/9] media: tegra: Add Tegra210 Video input driver


On 4/1/20 9:58 AM, Laurent Pinchart wrote:


> Hi Sowjanya,
>
> On Wed, Apr 01, 2020 at 09:36:03AM -0700, Sowjanya Komatineni wrote:
>> Hi Sakari/Laurent,
>>
>> A few questions to confirm my understanding of the discussion below.
>>
>> 1. Are the sensors you refer to as not working with a single devnode
>> controlling the pipeline devices the ones with a built-in ISP, where
>> pipeline and subdevice setup happen separately?
>
> Sensors that include ISPs could indeed need to be exposed as multiple
> subdevs, but I was mostly referring to raw Bayer sensors with hardware
> architectures similar to the SMIA++ and MIPI CCS specifications. Those
> sensors can perform cropping in up to three different locations (analog
> crop, digital crop, output crop), and can also scale in up to three
> different locations (binning, skipping and filter-based scaling).

> Furthermore, with the V4L2 support for multiplexed streams that we are
> working on, a sensor that can produce both image data and embedded data
> would also need to be split into multiple subdevs.

Thanks, Laurent.

For sensors that send meta/embedded data along with the image in the same frame, the Tegra VI hardware extracts the embedded data based on the programmed embedded data size.

So in our driver we capture it into a separate buffer, as the embedded data is part of the frame.

Is your comment above on multiplexed streams about sensors that use different virtual channels for different streams?


>> 2. With a driver supporting single-device-node control of the entire
>> pipeline, compared to an MC-based one, is the limitation with userspace
>> apps only for these complex camera sensors?
>
> In those cases, several policy decisions on how to configure the sensor
> (whether to use binning, skipping and/or filter-based scaling for
> instance, or how much cropping and scaling to apply to achieve a certain
> output resolution) will need to be implemented in the kernel, and
> userspace will not have any control over them.

>> 3. Will all upstream video capture drivers eventually be moved to the
>> MC-based model?
>
> I think we'll see a decrease of video-node-centric drivers in the
> future for embedded systems, especially the ones that include an ISP.
> When a system has an ISP, even if the ISP is implemented as a
> memory-to-memory device separate from the CSI-2 capture side, userspace
> will likely need fine-grained control of the camera sensor.

>> 4. Based on the libcamera documentation, it looks like it will work
>> with both MC-based and single-devnode pipeline setup drivers for normal
>> sensors, and the limitation is when we use a sensor with a built-in ISP
>> or an ISP hardware block. Is my understanding correct?
>
> libcamera supports both; it doesn't put any restriction in that area.
> The pipeline handler (the device-specific code in libcamera that
> configures and controls the hardware pipeline) is responsible for
> interfacing with the kernel drivers, and is free to use an MC-centric or
> video-node-centric API depending on what the kernel drivers offer.
>
> The IPA (image processing algorithms) module is also vendor-specific.
> Although it will not interface directly with kernel drivers, it
> determines how fine-grained the control of the sensor needs to be.
> For systems that have an ISP in the SoC, reaching a high image quality
> level requires fine-grained control of the sensor, or at the very least
> being able to retrieve fine-grained sensor configuration information
> from the kernel. For systems using a camera sensor with an integrated
> ISP and a CSI-2 receiver without any further processing on the SoC side,
> there will be no such fine-grained control of the sensor by the IPA (and
> there could even be no IPA module at all).

> --
> Regards,
>
> Laurent Pinchart


