Hi!

> > If so then the Video4Linux2 API is still the best way to implement
> > the protocol to pass data between the user-space programs and the
> > driver.  The V4L2 API doesn't say that the camera must be far away
> > from the processor; it can work for USB webcams, for Firewire video
> > cameras, PCI TV tuners, and probably also for your device.  Using
> > V4L2 not only has the advantage of being a well-tested API for
> > communicating video-related information with user space, it also
> > buys you the fact that any video program available on Linux should
> > be able to directly detect and use the capture device!
>
> Sorry, but no.  The camera has been designed to be used in industrial
> control applications, such as quality assurance.  Think of an
> automated inspection of a certain product, where the inspection is
> integrated into the production process.  Faulty products are sorted
> out.  For this to work it is absolutely necessary to get the maximum
> speed (image frames per second) out of the hardware, so image
> acquisition and processing must be carried out in parallel.  The way
> to achieve this is to have the driver manage a queue of image buffers
> to fill, so it will continue grabbing images even if no read
> operation is currently pending.  Also, the ability to attach
> user-specific context information to every buffer is essential.

Well, I guess the v4l API will need to be improved, then.  That is
still not a reason to introduce a completely new API...  A rough
sketch of what the existing streaming ioctls already give you follows
below, after my sig.

--
Thanks for all the (sleeping) penguins.
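
Purely as an illustration (untested, error handling stripped, and the
/dev/video0 node, buffer count and frame count are made up), here is
roughly what the queued, memory-mapped streaming path described in the
V4L2 spec looks like from user space:

/* Minimal user-space sketch of V4L2 streaming I/O with a
 * driver-managed queue of memory-mapped buffers.  Device node,
 * buffer count and frame count are assumptions; error checking
 * is omitted for brevity. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	int fd = open("/dev/video0", O_RDWR);	/* assumed device node */

	/* Ask the driver for a queue of buffers it can fill on its
	 * own, independent of any pending read(). */
	struct v4l2_requestbuffers req;
	memset(&req, 0, sizeof(req));
	req.count  = 4;
	req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_MMAP;
	ioctl(fd, VIDIOC_REQBUFS, &req);
	if (req.count > 4)	/* keep the fixed-size arrays below valid */
		req.count = 4;

	/* Map every buffer and hand it to the driver's incoming queue. */
	void *mem[4];
	size_t len[4];
	for (unsigned int i = 0; i < req.count; i++) {
		struct v4l2_buffer buf;
		memset(&buf, 0, sizeof(buf));
		buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory = V4L2_MEMORY_MMAP;
		buf.index  = i;
		ioctl(fd, VIDIOC_QUERYBUF, &buf);
		mem[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
			      MAP_SHARED, fd, buf.m.offset);
		len[i] = buf.length;
		ioctl(fd, VIDIOC_QBUF, &buf);
	}

	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	ioctl(fd, VIDIOC_STREAMON, &type);

	/* Acquisition and processing in parallel: the driver keeps
	 * grabbing into queued buffers while we work on dequeued ones. */
	for (int frame = 0; frame < 100; frame++) {
		struct v4l2_buffer buf;
		memset(&buf, 0, sizeof(buf));
		buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory = V4L2_MEMORY_MMAP;
		ioctl(fd, VIDIOC_DQBUF, &buf);
		/* process mem[buf.index], buf.bytesused bytes; per-buffer
		 * application context can be keyed off buf.index */
		ioctl(fd, VIDIOC_QBUF, &buf);
	}

	ioctl(fd, VIDIOC_STREAMOFF, &type);
	for (unsigned int i = 0; i < req.count; i++)
		munmap(mem[i], len[i]);
	close(fd);
	return 0;
}

In a sketch like this the application's per-buffer context would simply
live in its own array indexed by buf.index, which is one way to get
"user-specific context per buffer" without a new API.  Whether that is
convenient enough is exactly the kind of thing that could be improved
in v4l rather than replaced.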