3D images can be made from a 2D array of cameras. The mathematics is complex, and I have been trying early this morning to work it out.

The points in the object plane are O(x,y,z), and each camera has image points (j,k). If there is an array of cameras C(n,m), then with a depth index l the recorded image points will be C(n,m,j,k,l), tracing ray paths. So the image point C(n,m,j,k,l) will receive a contribution SUM O(x,y,z) from the object points (x,y,z), where j=x, k=y and l=z for each camera (n,m).

To recover this information as a 3D image to be seen, the ray paths have to be reconstructed. The virtual object VO consists of virtual point sources of light VO(x,y,z), and the light from this VO is the sum of the light from each image point C(n,m, jMAX-j, kMAX-k, lMAX-l), so

    VO(x,y,z) = SUM over n,m of C(n,m, jMAX-j, kMAX-k, lMAX-l), where j=x, k=y, l=z.

Here x,y,z and n,m are indices, but a single point in the object corresponds to a single point in the image plane and to a single point in the virtual object. If the image plane is a regular array, then the corresponding points on the object and on the virtual object will be non-linearly related to the image array.

I think a Japanese scientist has worked this out for his camera that allows you to refocus after taking the image, along with Adobe and others. I have not seen their analysis, and I doubt that I would understand it if I had.

Dr Chris
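The summation can at least be sanity-checked numerically. Below is a toy sketch of the idea, not anyone's actual camera model: the array sizes are made up, and the forward model is the simplest possible one (every camera records the object with j=x, k=y, l=z, as in the post). It places one bright voxel in the object, records it into every C(n,m), then applies the VO(x,y,z) = SUM C(n,m, jMAX-j, kMAX-k, lMAX-l) reconstruction.

```python
import numpy as np

# Toy discretisation -- sizes are arbitrary assumptions, not from any real camera.
JMAX, KMAX, LMAX = 4, 4, 4      # image-point indices j, k, l
N, M = 3, 3                     # camera array C(n, m)

# Object O(x,y,z): virtual point sources of light; here a single bright voxel.
O = np.zeros((JMAX, KMAX, LMAX))
O[1, 2, 3] = 1.0

# Forward model (assumed identity mapping j=x, k=y, l=z for each camera n,m):
# every camera receives a contribution from each object point.
C = np.zeros((N, M, JMAX, KMAX, LMAX))
for n in range(N):
    for m in range(M):
        C[n, m] = O

# Reconstruction: VO(x,y,z) = SUM over n,m of C(n,m, jMAX-1-j, kMAX-1-k, lMAX-1-l).
# The "MAX minus index" flip is the ray-path inversion from the post; the extra -1
# is only because Python arrays are 0-based. Negative-step slices do the flip.
VO = C[:, :, ::-1, ::-1, ::-1].sum(axis=(0, 1))

# The bright voxel reappears at the mirrored position, summed over all N*M cameras.
print(np.unravel_index(VO.argmax(), VO.shape))   # (2, 1, 0)
print(VO.max())                                  # 9.0  (one unit from each of 9 cameras)
```

With this identity forward model the reconstruction simply mirrors the voxel through the array; a real camera array would replace the `C[n, m] = O` line with per-camera ray-traced projections, and the flip would become the non-linear correspondence mentioned above.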