Re: [RFC] Pixel format definition on the "image" bus

> On Thu, 27 Aug 2009, Hans Verkuil wrote:
>
>> > Unfortunately, even the current soc-camera approach with its
>> > format-enumeration and -conversion API is not enough. As I explained
>> > above, there are two ways you can handle the source's data: "cooked"
>> > and "raw." The "cooked" way is simple - the sink knows exactly this
>> > specific format and knows how to deal with it. Every sink has a
>> > finite number of such natively supported formats, so that's just a
>> > switch-case statement in each sink driver, that is specific to each
>> > sink hardware, and that you cannot avoid.
>> >
>> > It's the "raw" or "pass-through" mode that is difficult. It is used
>> > when the sink does not have any specific knowledge about this
>> > format, but can pack data into RAM in some way, or, hopefully, in a
>> > number of ways, among which we can choose. The source "knows" what
>> > data it is delivering, and, in principle, how this data has to be
>> > packed in RAM to provide some meaningful user format. Now, we have
>> > to pass this information on to the sink driver to tell it "if you
>> > configure the source to deliver the raw format X, and then configure
>> > your bus in a way Y and pack the data into RAM in a way Z, you get
>> > user format W in RAM." So, my proposal is: during probing, the sink
>> > enumerates all raw formats provided by the source, accepts those
>> > formats that it can process natively ("cooked" mode), and verifies
>> > whether it can be configured to bus configuration Y and can perform
>> > packing Z; if so, it adds format W to the list of supported formats.
>> > Do you see an easier way to do this? I'm currently trying to port
>> > one driver combination to this scheme, I'll post a patch, hopefully,
>> > later today.
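
(For concreteness, here is a rough sketch of what the probe-time
enumeration proposed above could look like. Every type and helper name
below is a hypothetical placeholder, not an existing soc-camera API.)

/* Sketch only: the sink builds its format list at probe time by
 * walking the raw formats the source can deliver. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct raw_format {
        uint32_t code;        /* raw format X on the bus */
        uint32_t user_fourcc; /* user format W it can map to */
};

/* Hypothetical hooks provided by the source and the sink. */
extern size_t source_enum_raw_formats(struct raw_format *f, size_t max);
extern bool sink_handles_natively(uint32_t code);              /* "cooked" */
extern bool sink_can_bus_config_and_pack(uint32_t code,        /* Y and Z */
                                         uint32_t user_fourcc);

#define MAX_FORMATS 32

static size_t sink_build_format_list(uint32_t *supported, size_t max)
{
        struct raw_format f[MAX_FORMATS];
        size_t i, count = 0;
        size_t n = source_enum_raw_formats(f, MAX_FORMATS);

        for (i = 0; i < n && count < max; i++) {
                /* "cooked": the sink converts this format natively, or
                 * "raw": bus config Y plus packing Z yield format W */
                if (sink_handles_natively(f[i].code) ||
                    sink_can_bus_config_and_pack(f[i].code,
                                                 f[i].user_fourcc))
                        supported[count++] = f[i].user_fourcc;
        }
        return count;
}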
>>
>> I'm not so keen on attempting to negotiate things that probably are
>> impossible to negotiate anyway. (You may have noticed that before :-) )
>
> I bought your argument about subtle image corruption that might be
> difficult to trace back to a wrongly chosen signal polarity and/or
> sampling edge. Now, what's your argument for this one, apart from
> being "not so keen"? Being not keen doesn't seem a sufficient
> argument to me for turning platform data into a trash-bin.
>
> Example: currently a SuperH CEU platform combined with an OV772x
> camera sensor can provide 11 output formats. There are at least two
> such boards currently in the mainline with the same bus
> configuration. Do you want to reproduce these 11 entries exactly for
> these two boards? What about other boards?
>
>> One approach would be to make this mapping part of the platform data
>> that is passed to the bridge driver.
>>
>> For a 'normal' PCI or USB driver, information like this would be
>> contained in the bridge driver. Here you have a generic bridge driver
>> intended to work with different SoCs, so now that information has to
>> move to the platform data. That's the only place where you know
>> exactly how to set up these things.
>>
>> So you would end up with a list of config items:
>>
>>   <user fourcc>, <bridge fourcc>, <sensor fourcc>, <bus config>
>>
>> And the platform data of each sensor device would have such a list.
>>
>> So the bridge driver knows that VIDIOC_ENUM_FMT can give <user
>> fourcc> back to the user, and if the user selects that, then it has
>> to set up the bridge using <bridge fourcc> and the sensor using
>> <sensor fourcc>, and the bus as <bus config>.
>>
>> This is just a high level view as I don't have time to go into this
>> in detail, but I think this is a reasonable approach. It's really no
>> different to what the PCI and USB drivers are doing, except
>> formalized for the generic case.
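
(As a quick illustration, a per-board translation list along the lines
quoted above might look like this. The struct layout and the values are
hypothetical; only the <user fourcc>/<bridge fourcc>/<sensor fourcc>/
<bus config> tuple comes from the text.)

#include <stdint.h>

/* Little-endian fourcc packing, same layout as v4l2_fourcc(). */
#define FOURCC(a, b, c, d) \
        ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
         ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

struct fmt_translation {
        uint32_t user_fourcc;   /* what VIDIOC_ENUM_FMT reports */
        uint32_t bridge_fourcc; /* how to program the bridge */
        uint32_t sensor_fourcc; /* how to program the sensor */
        uint32_t bus_config;    /* polarities, edges, bus width, ... */
};

/* Example per-board platform data entry; values are placeholders. */
static const struct fmt_translation board_fmt_map[] = {
        {
                .user_fourcc   = FOURCC('Y', 'U', 'Y', 'V'),
                .bridge_fourcc = FOURCC('Y', 'U', 'Y', 'V'),
                .sensor_fourcc = FOURCC('U', 'Y', 'V', 'Y'),
                .bus_config    = 0x1, /* e.g. a "sample on rising
                                       * pclk edge" flag */
        },
};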
>
> Please give me a valid reason why this cannot be auto-enumerated.

For example: a sensor connected to an FPGA (doing e.g. color space
conversion or some other kind of image processing or image
improvement), which in turn is connected to the bridge.

How you set up the sensor and how you set up the bridge might not have
an obvious 1-to-1 mapping. While I have not seen setups like this for
sensors, I have seen them for video encoder devices.

You assume that a sensor is connected directly to a bridge, but that
assumption is simply not true. There may be all sorts of ICs in between.

One alternative is to have two approaches: a simple one where you just try
to match what the sensor can do and what the bridge can accept, and one
where you can override it from the platform data.

The latter does not actually have to be implemented as long as there are
no boards that need that, but it should be designed in such a way that it
is easy to implement it later.
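
(One way the two approaches could coexist, as a sketch with
hypothetical names: the bridge uses a board-supplied translation table
when the platform data provides one, and falls back to automatic
matching otherwise.)

#include <stddef.h>
#include <stdint.h>

struct fmt_translation; /* as in the sketch above */

struct sensor_platform_data {
        /* NULL means "no board-specific override, auto-negotiate" */
        const struct fmt_translation *fmt_map;
        size_t n_fmt_map;
};

extern size_t auto_match_formats(uint32_t *fourccs, size_t max);
extern size_t formats_from_table(const struct fmt_translation *map,
                                 size_t n, uint32_t *fourccs, size_t max);

static size_t build_format_list(const struct sensor_platform_data *pdata,
                                uint32_t *fourccs, size_t max)
{
        if (pdata && pdata->fmt_map)
                return formats_from_table(pdata->fmt_map,
                                          pdata->n_fmt_map, fourccs, max);
        /* No override: match what the sensor can deliver against what
         * the bridge can accept. */
        return auto_match_formats(fourccs, max);
}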

It's my opinion that we have to be careful not to try to be too
intelligent. There is simply too much variation in hardware out there
to ever hope to handle it all automatically.

Regards,

          Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG
