On Fri, 20 Jun 2003 23:58:03 -0700 Joel Rodriguez <joel@xxxxxxxxx> wrote:

> Thanks for your attention to the matter, Ernst:
>
> It is enough information to get me going :)
>
> I will take a close look at the link, which by the way looks very
> impressive. The approach I was thinking of was not conjugate gradient
> itself, but a variant of constrained optimization:
>
> http://www-fp.mcs.anl.gov/otc/Guide/SoftwareGuide/Categories/constropt.html
>
> particularly the least squares solution, which I'm familiar with:
>
> http://www.sbsi-sol-optimize.com/products_lssol.htm
>
> P.S. Won't bother you for a while, thanks for your attention. Yesterday's
> tequila sometimes makes me feel that P=NP... heh... :)

Oh, but that is a valid feeling even when you are sober: nobody has proved
that they are different. It might be wise to wait with reading the rest of
this post until you have fully recovered :)

But when you have recovered, I would advise you to study Fourier analysis.
I found it very helpful in explaining why deconvolution is so difficult.

One important fact is that a convolution can be described as a
multiplication in the Fourier domain, i.e. the Fourier transform of the
result is equal to the Fourier transform of the input multiplied by the
Fourier transform of the convolution. This implies that the inverse
operation (the deconvolution) can be described in the Fourier domain as a
division by the Fourier transform of the convolution.

But in virtually all cases the Fourier transform of the convolution
contains some values that are very small. In the case that your
convolution is circularly symmetric, it is possible to prove that its
Fourier transform must contain at least one value that is equal to zero.
Obviously, when the Fourier transform of the convolution contains any
zeros, there cannot be an exact inverse, because division by zero is
undefined. The small values in the Fourier transform cause problems as
well: dividing by a small number is of course equivalent to multiplying by
a big number, so the inverse of the convolution will greatly magnify all
errors in the image components that correspond to the small values.

This also helps to explain why least squares minimizations frequently give
horrible results. Take an image A whose Fourier transform is significantly
different from zero only for the components where the Fourier transform of
the convolution is close to zero. It should be clear that convolving this
image gives a result where every pixel is almost equal to zero. Because
convolution is a linear operation, when you add this image A to another
image B and then apply the convolution, the end result must be equal to
the convolution of A plus the convolution of B. But because the
convolution of A is almost zero, the convolution of A + B is almost equal
to the convolution of B. This means that the least squares criterion is
not very well defined: when some multiple of A is added to an image, the
least squares distance only changes by a very small amount. In practice
this means that I can have two completely different images, one that looks
like a "normal" image and one that looks completely like random noise, and
yet when I apply the convolution to both images the results can be almost
identical. The problem with least squares optimization is that this
procedure cannot distinguish between these two images.
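If you want to see this with concrete numbers, here is a rough sketch in
Python with NumPy. It is only meant to illustrate the argument above, it
is not taken from any existing deconvolution code, and the Gaussian
kernel, the sizes, the noise level and the threshold are all arbitrary
choices for the example:

  import numpy as np

  N = 256
  x = np.arange(N)
  rng = np.random.default_rng(0)

  # A Gaussian blur as the convolution, rolled so that circular convolution
  # via the FFT matches the convolution theorem.
  sigma = 2.0
  kernel = np.exp(-0.5 * ((x - N // 2) / sigma) ** 2)
  kernel /= kernel.sum()
  kernel = np.roll(kernel, -N // 2)

  K = np.fft.fft(kernel)                   # Fourier transform of the convolution
  print("smallest |K|:", np.abs(K).min())  # some components are almost zero

  # Convolution is a multiplication in the Fourier domain.
  image = np.zeros(N)
  image[100:140] = 1.0                     # a very simple "image": a block of height 1
  blurred = np.fft.ifft(np.fft.fft(image) * K).real

  # Deconvolution as a division in the Fourier domain, with only 1% noise
  # added to the blurred image.
  noisy = blurred + 0.01 * rng.standard_normal(N)
  naive = np.fft.ifft(np.fft.fft(noisy) / K).real
  print("error of the naive inverse:", np.abs(naive - image).max())

  # An image A whose Fourier transform only lives where |K| is small.
  A_hat = np.where(np.abs(K) < 1e-3, 1.0, 0.0)
  A = np.fft.ifft(A_hat).real
  A /= np.abs(A).max()                     # scale A so that it is clearly visible
  conv_A = np.fft.ifft(np.fft.fft(A) * K).real
  print("largest value in the convolution of A:", np.abs(conv_A).max())

  # image and image + A are completely different, but after the convolution
  # they are almost identical, so a least squares criterion can hardly
  # tell them apart.
  blurred_sum = np.fft.ifft(np.fft.fft(image + A) * K).real
  print("difference between the two images: ", np.abs(A).max())
  print("difference after the convolution:  ", np.abs(blurred_sum - blurred).max())

The exact numbers are not the point, but the naive inverse should be off
by several orders of magnitude, while image and image + A, which differ by
a full unit in many pixels, should give nearly the same result after the
convolution.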
This is the reason that least squares optimization procedures do not
perform well: the optimal solution often visually looks like random noise.
Most techniques that are based on least squares minimization are therefore
iterative and do not attempt to find the real minimum. Normally they
require user intervention to determine the number of iteration steps (a
rough sketch of such a scheme is included at the end of this mail).

Perhaps this explanation is a bit too convoluted, but I think that it
contains some important points. Feel free to ask when you have any
problems.

greetings,

Ernst Lippe
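P.S. Here is a rough sketch of such an iterative scheme, continuing with
the kernel K, the noisy blurred image and the original image from the
sketch earlier in this mail. It simply does gradient descent on the least
squares criterion, one Fourier component at a time (often called the
Landweber iteration); the step size and the step counts are arbitrary
choices for the illustration, not a recommendation:

  import numpy as np

  def iterate(blurred, K, steps, tau=1.0):
      # Gradient descent on |convolution(x) - blurred|^2, per Fourier component.
      B = np.fft.fft(blurred)
      X = np.zeros_like(B)
      for _ in range(steps):
          X = X + tau * np.conj(K) * (B - K * X)
      return np.fft.ifft(X).real

  # Stopped early: the components where |K| is small are hardly touched,
  # so the noise in them is not amplified.
  few = iterate(noisy, K, steps=100)
  # Much closer to the real minimum: the noise in the small |K| components
  # gets amplified and starts to dominate the result.
  many = iterate(noisy, K, steps=100000)
  print("error after    100 steps:", np.abs(few - image).max())
  print("error after 100000 steps:", np.abs(many - image).max())

The number of steps effectively plays the role of the regularization
parameter: the longer you iterate, the closer you get to the exact least
squares minimum and the more the result starts to look like amplified
noise.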