Re: [RFC] Pixel format definition on the "image" bus

On Mon, 31 Aug 2009, Hans Verkuil wrote:

> > Yes, in a way. We agree that we describe the data from the sensor on the
> > image bus with a unique ID (data format code enum), right? Now, what does
> > this ID tell us? It should tell us what we can get from this format in
> > RAM, right?
> 
> No, it does not. That is only true for the bridge which is actually doing
> the DMA. So calling VIDIOC_S_FMT in the application will indeed request a
> specific memory layout of the image. But you cannot attach that same
> meaning when configuring a sensor or video decoder/encoder device. That
> has no knowledge whatsoever about memory layouts.
> 
> In principle sensor devices (to stay with that example) support certain
> bus configurations and for each configuration they can transfer an image
> in certain formats (the format being the order in which image data is
> transported over the data bus). These formats can be completely unique to
> that sensor, or (much more likely) be one of a set of fairly common
> formats. If unique formats are encountered it is likely to be some sort of
> compressed image. Raw images are unlikely to be unique.
> 
> I do not believe that you can in general autonegotiate this. But there are
> many cases where you can do a decent job of it. To do that the bridge
> needs a mapping between memory layouts (the pixelformat specified with S_FMT)
> and the image format coming in on the data pins (let's call it the datapin
> format). Then it can query the sensor which datapin formats it supports
> and select the appropriate one.
> 
> This approach makes the correct split between memory and datapin formats.
> Mixing them is a really bad idea.

I think you just explained in other words exactly what I was trying to 
say. By "It should tell us what we can get from this format in RAM" I 
meant exactly the mapping that you describe above. And further:

> > Since codes are unique, this information should be globally
> > available. That's why I'm decoding format codes into (RAM) format
> > descriptors centrally in v4l2-imagebus.c. And then hosts can use those
> > descriptors to decide which packing to use to obtain the required fourcc
> > in RAM.

What I call "decoding" is what you call "mapping."
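
Roughly, the idea could look something like this (just a sketch for this 
mail, the names below are illustrative only and not the actual 
v4l2-imagebus.c interface): a central table decodes each image bus format 
code into a descriptor of what it produces in RAM, and the bridge walks 
the codes its sensor supports to find one whose descriptor matches the 
fourcc requested with VIDIOC_S_FMT.

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/videodev2.h>

/* Illustrative image bus format codes (datapin formats) */
enum v4l2_imgbus_code {
	IMGBUS_FMT_YUYV8_2X8,
	IMGBUS_FMT_SBGGR10_1X10,
	IMGBUS_FMT_MAX,
};

/* What one bus code produces in RAM: fourcc plus packing information */
struct v4l2_imgbus_pixelfmt {
	u32 fourcc;
	u8  bits_per_sample;
};

/* Central decode table (the part that would live in v4l2-imagebus.c) */
static const struct v4l2_imgbus_pixelfmt imgbus_fmt[IMGBUS_FMT_MAX] = {
	[IMGBUS_FMT_YUYV8_2X8]    = { .fourcc = V4L2_PIX_FMT_YUYV,
				      .bits_per_sample = 8 },
	[IMGBUS_FMT_SBGGR10_1X10] = { .fourcc = V4L2_PIX_FMT_SBGGR10,
				      .bits_per_sample = 10 },
};

/*
 * Bridge side: given the list of codes the sensor can put on the bus,
 * pick the first one that decodes to the pixelformat the application
 * requested with VIDIOC_S_FMT.
 */
static int bridge_pick_code(const enum v4l2_imgbus_code *sensor_codes,
			    int num_codes, u32 fourcc,
			    enum v4l2_imgbus_code *code)
{
	int i;

	for (i = 0; i < num_codes; i++)
		if (imgbus_fmt[sensor_codes[i]].fourcc == fourcc) {
			*code = sensor_codes[i];
			return 0;
		}

	return -EINVAL;
}

The point being: only the central table knows how a code maps to memory, 
the sensor only enumerates codes, and only the bridge, which does the 
DMA, combines the two.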

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/