Greetings;

I am setting up a machine vision system in which the camera output is cropped to the point that one camera pixel covers 20 or more pixels on the computer screen, since we want .001" or .01mm accuracy in the final image presented to the machine operator. Imagine, if you will, a 2304xwhatever camera of which we use only the central 300 or so pixels in our application. So we use a fairly high-resolution camera and crop the image down to the central region of interest as the first step in the processing chain, making it faster by giving it less data to fiddle with.

Because our video processing chain is slow in frames per second (in some modes it's even frames per minute), it makes sense to waste as little time as possible in the camera by running it at its native resolution with no compression. Ideally we would see the result in real time, but once you add crosshair targets and such to the stream, true real time isn't going to happen. Still, moving the machine 2 thousandths of an inch and then waiting 3 seconds or more to actually see the movement on screen is, shall we say, frustrating to the operator.

So, in the interest of maintaining square pixels and a 1:1 pixel-to-pixel ratio (so as not to throw away usable resolution through blending or interpolation in the camera), is there a returned value in the lsusb -v or -vv output that identifies the camera imager's native resolution and its output byte color-sequence format?

Thank you.

Cheers, Gene
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)

I've always made it a solemn practice to never drink anything stronger
than tequila before breakfast.
  -- R. Nesson

A pen in the hand of this president is far more dangerous than 200
million guns in the hands of law-abiding citizens.
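[Editor's note: for a UVC camera, lsusb -vv does print the VideoStreaming interface descriptors, where each FRAME descriptor carries a wWidth/wHeight pair and each FORMAT descriptor carries a guidFormat identifying the pixel layout (e.g. YUY2). Below is a minimal sketch of pulling those fields out of a saved dump; the SAMPLE text and its numbers are a hypothetical excerpt in the shape lsusb uses, not output from any particular camera.]

```python
import re

# Hypothetical excerpt of `lsusb -vv` output for a UVC camera; real
# VideoStreaming descriptors follow this general shape.
SAMPLE = """\
      VideoStreaming Interface Descriptor:
        bDescriptorSubtype              4 (FORMAT_UNCOMPRESSED)
        guidFormat                      {32595559-0000-0010-8000-00aa00389b71}
      VideoStreaming Interface Descriptor:
        bDescriptorSubtype              5 (FRAME_UNCOMPRESSED)
        wWidth                       2304
        wHeight                      1536
      VideoStreaming Interface Descriptor:
        bDescriptorSubtype              5 (FRAME_UNCOMPRESSED)
        wWidth                       1152
        wHeight                       768
"""

def frame_sizes(text):
    """Pair each wWidth with the wHeight that follows it."""
    widths = [int(w) for w in re.findall(r"wWidth\s+(\d+)", text)]
    heights = [int(h) for h in re.findall(r"wHeight\s+(\d+)", text)]
    return list(zip(widths, heights))

sizes = frame_sizes(SAMPLE)
# The largest advertised frame is usually the sensor's native mode.
native = max(sizes, key=lambda wh: wh[0] * wh[1])
print(native)
```

If v4l-utils is installed, `v4l2-ctl --list-formats-ext -d /dev/video0` reports the same format/resolution list in a more readable form, via the driver rather than the raw descriptors.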
--
To unsubscribe from this list: send the line "unsubscribe linux-usb" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html