I am currently developing a new V4L2 image sensor driver to acquire sequences of still images and wonder how to interface it to the V4L2 API.

Currently, cameras are assumed to deliver an endless stream of images after being started internally with VIDIOC_STREAMON. If supported by the driver, a certain frame rate is used. For precise image capturing, I need two additional features:

Limiting the number of captured images: It is desirable not to have to stop streaming from user space, because of the latency that involves. A typical application is single shots at random times, possibly with little time between the end of one capture and the start of the next, so an image that could not be stopped in time would be a problem. A video camera would only support "unlimited" as a possible capture limit; scientific cameras may offer more choices, or possibly only limited capturing.

Configuring the capture trigger: Right now, sensors are implicitly triggered internally by the driver. Being able to configure external triggers, which many sensors support, is needed to start capturing at exactly the right time. Again, video cameras may only offer "internal" as trigger type.

Perhaps V4L2 already offers something that I overlooked. If not, what would be good ways to extend it?

Regards,
Michael