Re: [RFC] Pixel format definition on the "image" bus

> On Mon, 31 Aug 2009, Hans Verkuil wrote:
>
>>
>> > On Thu, 27 Aug 2009, Hans Verkuil wrote:
>> >
>> >> It's my opinion that we have to be careful in trying to be too
>> >> intelligent. There is simply too much variation in hardware out
>> >> there to ever hope to be able to do that.
>> >
>> > An opinion has been expressed that my proposed API was too complex
>> > and that, for example, the .packing parameter was not needed. To
>> > give an argument for why it is indeed needed: OMAP 3 can pack raw
>> > 10-, 12- (and 14-?) bit data in RAM in two ways, so a sensor would
>> > use the .packing parameter to specify how its data has to be
>> > arranged in RAM to produce a specific fourcc code.
>>
>> One thing that I do not understand in your proposal: how would a sensor
>> know how its data is going to be arranged in RAM? It knows nothing about
>> that. It can just transport the image data over the data pins in a
>> certain
>> number of formats, but how those are eventually arranged in RAM is
>> something that only the bridge driver will know.
>>
>> A sensor should tell how its data is transported over the data pins, not
>> what it will look like in RAM.
>
> Yes, in a way. We agree that we describe the data from the sensor on
> the image bus with a unique ID (a data format code enum), right? Now,
> what does this ID tell us? It should tell us what we can get from this
> format in RAM, right?

No, it does not. That is only true for the bridge, which is actually doing
the DMA. So calling VIDIOC_S_FMT in the application will indeed request a
specific memory layout of the image. But you cannot attach that same
meaning when configuring a sensor or video decoder/encoder device, which
has no knowledge whatsoever of memory layouts.

In principle, sensor devices (to stay with that example) support certain
bus configurations, and for each configuration they can transfer an image
in certain formats (the format being the order in which image data is
transported over the data bus). These formats can be completely unique to
that sensor or (much more likely) be one of a set of fairly common
formats. If a unique format is encountered, it is likely to be some sort
of compressed image; raw images are unlikely to be unique.

I do not believe that you can autonegotiate this in general, but there are
many cases where you can do a decent job of it. To do that, the bridge
needs a mapping between memory layouts (the pixelformat specified with
S_FMT) and the image formats coming in on the data pins (let's call them
datapin formats). It can then query the sensor for the datapin formats it
supports and select the appropriate one.

This approach makes the correct split between memory and datapin formats.
Mixing them is a really bad idea.
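A rough sketch of that negotiation, with invented names and a made-up
query interface (the fourcc values below happen to match 'YUYV' and
'BA10', but nothing here is the real V4L2 API): the bridge owns the
table relating datapin formats to memory layouts, asks the sensor what
it can emit, and picks the first match.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical negotiation sketch, not actual kernel code. */

typedef uint32_t fourcc_t;
enum bus_code { BUS_NONE, BUS_YUYV8_2X8, BUS_SBGGR10_1X10 };

struct fmt_map {
	fourcc_t pixelformat;	/* memory layout requested via S_FMT  */
	enum bus_code code;	/* datapin format that can produce it */
};

/* The bridge's knowledge: which datapin format it can DMA into
 * which memory layout. The sensor never sees this table. */
static const struct fmt_map bridge_maps[] = {
	{ 0x56595559 /* 'YUYV' */, BUS_YUYV8_2X8 },
	{ 0x30314142 /* 'BA10' */, BUS_SBGGR10_1X10 },
};

/* Return the datapin format the bridge should configure the sensor
 * for, given the pixelformat the application asked for, or BUS_NONE
 * if the sensor offers nothing the bridge can store that way. */
static enum bus_code negotiate(fourcc_t pixelformat,
			       const enum bus_code *sensor_codes,
			       size_t n_codes)
{
	size_t i, j;

	for (i = 0; i < sizeof(bridge_maps) / sizeof(bridge_maps[0]); i++) {
		if (bridge_maps[i].pixelformat != pixelformat)
			continue;
		for (j = 0; j < n_codes; j++)
			if (sensor_codes[j] == bridge_maps[i].code)
				return bridge_maps[i].code;
	}
	return BUS_NONE;
}
```

Note that the sensor's side of the exchange is only "here is the list
of datapin formats I support"; everything about RAM stays on the
bridge's side of the table.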

Regards,

          Hans

> Since codes are unique, this information should be globally
> available. That's why I'm decoding format codes into (RAM) format
> descriptors centrally in v4l2-imagebus.c. Hosts can then use those
> descriptors to decide which packing to use to obtain the required
> fourcc in RAM.
>
> Thanks
> Guennadi
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
>


-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG

