Hi Lane,

On Wednesday 24 November 2010 01:14:13 Lane Brooks wrote:
> On 11/23/2010 04:45 PM, Laurent Pinchart wrote:
> > On Tuesday 23 November 2010 23:29:10 Lane Brooks wrote:
> >> Laurent,
> >>
> >> If the links are set up to the resizer, then it seems that user space
> >> applications should be able to talk to the resizer output (/dev/video3)
> >> like a traditional V4L2 device and need not worry about the new media
> >> framework. It even seems possible for the resizer to allow the final
> >> link format to be adjusted, so that the user space application can
> >> actually adjust the resizer subdev output format across the range of
> >> valid resizer options based on the format of the resizer input pad. If
> >> the resizer output device node worked this way, then our camera would
> >> work with all the existing V4L2 applications, with the simple caveat
> >> that the user has to run a separate setup application first.
> >>
> >> The resizer output device node does not currently behave this way, and
> >> I am not sure why. These are the reasons I can think of:
> >> 1. It has not been implemented this way yet.
> >> 2. I am doing something incorrectly with the media-ctl application.
> >> 3. It is not intended to work this way (by the new media framework
> >>    design principles).
> >> 4. It cannot work this way because of some reason I am not considering.
> >
> > It's probably a combination of 1 and "it cannot work this way because of
> > reasons I can't remember at 1 AM" :-)
> >
> > The ISP video device nodes implementation doesn't initialize vfh->format
> > when the device node is opened. I think this should be fixed by querying
> > the connected subdevice for its current format. Of course there could be
> > no connected subdevice when the video device node is opened, in which
> > case the format can't be initialized. Pure V4L2 applications must not
> > try to use the video device nodes before the pipeline is initialized.
>
> I'll look into implementing this. This is mostly what I am looking for
> and hopefully won't be too involved to implement.
>
> > Regarding adjusting the format at the output of the connected subdevice
> > when the video device node format is set, that might be possible to
> > implement, but we will run into several issues. One of them is that
> > applications can currently open the video device nodes, set the format
> > and request buffers without influencing the ISP at all. The format set
> > on the video device node will be checked against the format on the
> > connected pad at streamon time. This allows preallocating buffers for
> > snapshot capture to lower snapshot latency. Making set_format configure
> > the connected subdev directly would break this.
>
> How does calling set_format on the subdev pad at the same time as on the
> device node prevent preallocating buffers? I don't really understand the
> ISP buffering, so I think at this point I will look into implementing the
> previous option, and then perhaps I will have a better understanding of
> the issue you raise here. I think it is only the resizer that would need
> this capability. I am bringing it up as a nice-to-have, but we can
> certainly live without it if it does not fit into the design goals of the
> framework.

To preallocate buffers the driver needs to know the buffer size, and that
information is computed using the format set on the video device node. If
you want to preallocate two sets of buffers, you need to call VIDIOC_S_FMT
with different sizes on two file handles before calling VIDIOC_REQBUFS,
along the lines of the sketch below.
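Roughly something like this (an untested sketch only, not a reference
implementation; the device node path, resolutions, pixel format and buffer
count are placeholders, and it relies on the driver keeping the format and
the buffers per file handle as described above):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

/* Open a new handle on the video node, set a format on that handle and
 * preallocate a set of buffers for it. The buffers stay allocated as long
 * as the handle is kept open.
 */
static int preallocate(const char *dev, unsigned int width,
                       unsigned int height)
{
        struct v4l2_format fmt;
        struct v4l2_requestbuffers req;
        int fd;

        fd = open(dev, O_RDWR);
        if (fd < 0)
                return -1;

        memset(&fmt, 0, sizeof fmt);
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = width;
        fmt.fmt.pix.height = height;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
                goto error;

        memset(&req, 0, sizeof req);
        req.count = 4;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
                goto error;

        return fd;

error:
        close(fd);
        return -1;
}

int main(void)
{
        /* One viewfinder-sized set and one full-resolution set. */
        int fd_view = preallocate("/dev/video3", 640, 480);
        int fd_snap = preallocate("/dev/video3", 2048, 1536);

        if (fd_view < 0 || fd_snap < 0) {
                perror("preallocation failed");
                return 1;
        }

        /* ... pick the handle that matches the pipeline configuration
         * and start streaming on it ...
         */

        return 0;
}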
That's a hack, and that's why VIDIOC_S_FMT on the video device nodes does
not configure the connected pad. At some point in the future we will need
to brainstorm a buffer management API that will solve this problem in a
clean way.

-- 
Regards,

Laurent Pinchart