Setting aside the debate over which technique is "better," I'd like to know how the light-field camera actually works.
Does it, for instance, actually take many photos and store them in a single file? -- or does it take two images à la 3D technology and then mathematically combine them on the fly according to instructions from a viewer? -- or does it generate a hologram using some internally generated reference beam?
It's fascinating to imagine, speaking as one of those people who enjoys that kind of stuff. . . .
-yoram
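For what it's worth, the plenoptic design behind cameras like the Lytro records a single exposure through a microlens array, so one raw file encodes many slightly shifted sub-aperture views, and refocusing afterwards is purely computational. Below is a minimal sketch of the "shift-and-add" refocusing idea, assuming the light field has already been decoded into a 4D NumPy array L[u, v, y, x]; the refocus function, the integer-pixel shifts, and the random test data are illustrative simplifications, not any camera's actual pipeline.

import numpy as np

def refocus(lightfield, slope):
    """Average all sub-aperture views after shifting each one in proportion
    to its angular offset; `slope` selects the synthetic focal plane."""
    U, V, H, W = lightfield.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Integer shifts via np.roll keep the sketch simple; a real
            # implementation would interpolate sub-pixel shifts.
            dy = int(round(slope * (u - uc)))
            dx = int(round(slope * (v - vc)))
            out += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Example: a synthetic 5x5-view light field of random data, refocused at two depths.
lf = np.random.rand(5, 5, 64, 64)
near = refocus(lf, slope=1.0)
far = refocus(lf, slope=-1.0)

Varying the slope moves the synthetic focal plane through the scene; that is essentially what the "refocus after the fact" feature does for you interactively.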
The ability to take pictures in lower light, with the lens wide open and with less noise. But cost-wise it is much cheaper to put a DSLR camera on a tripod (for still subjects), take multiple shots at different points of focus, and use one of the programs for combining the images into one.

Roy

In a message dated 6/22/2011 5:21:39 P.M. Eastern Daylight Time, andpph@xxxxxxx writes:

OTOH it seems to me that maybe a faster way to reach the goal is to make a photo with a small aperture, where everything is sharp, and then defocus areas progressively farther from the chosen plane of focus - this could be done in postprocessing, I suspect. Use a pinhole to get depth of field, sharpen that image (for pseudo detail), and then defocus at will. What is wrong with this picture? ;)
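Roy's tripod-and-focus-bracketing route is what the stacking programs automate: for every pixel, keep the value from the frame in which that pixel is sharpest. A rough sketch of that selection step, assuming the bracketed frames are already aligned, single-channel NumPy arrays (the function name, the Laplacian sharpness measure, and the random test data are just illustrative; real stacking tools also handle alignment and seam blending):

import numpy as np
from scipy import ndimage

def focus_stack(frames):
    """Per pixel, keep the value from the frame with the strongest local
    sharpness (absolute Laplacian response, lightly smoothed)."""
    stack = np.stack(frames)                          # (N, H, W)
    sharpness = np.stack([
        ndimage.gaussian_filter(np.abs(ndimage.laplace(f)), sigma=2)
        for f in frames
    ])
    best = np.argmax(sharpness, axis=0)               # (H, W) index of sharpest frame
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Example with synthetic data standing in for shots focused at different planes.
frames = [np.random.rand(100, 100) for _ in range(4)]
merged = focus_stack(frames)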
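The "shoot everything sharp, then defocus in post" idea in the quoted message amounts to blurring each pixel in proportion to its distance from a chosen focal plane, which only works if you also know (or estimate) a per-pixel depth map; that missing depth information is arguably what is "wrong with this picture." A hedged sketch under those assumptions (the depth map, function name, and parameters are hypothetical):

import numpy as np
from scipy import ndimage

def synthetic_defocus(sharp, depth, focal_depth, max_sigma=6.0):
    """Blur each pixel in proportion to its distance from `focal_depth`,
    using a small bank of pre-blurred copies of the sharp image."""
    # Blur strength grows with distance from the synthetic focal plane.
    sigma_map = np.clip(np.abs(depth - focal_depth) * max_sigma, 0, max_sigma)
    sigmas = np.linspace(0, max_sigma, 8)
    blurred = np.stack([sharp if s == 0 else ndimage.gaussian_filter(sharp, s)
                        for s in sigmas])
    # Pick, per pixel, the pre-blurred copy whose sigma is closest to sigma_map.
    idx = np.abs(sigmas[:, None, None] - sigma_map[None]).argmin(axis=0)
    return np.take_along_axis(blurred, idx[None], axis=0)[0]

# Example: a random "all in focus" image with a depth ramp, focused on mid depth.
img = np.random.rand(80, 80)
depth = np.tile(np.linspace(0, 1, 80), (80, 1))
shallow = synthetic_defocus(img, depth, focal_depth=0.5)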