Re: [Gimp-developer] Re: Blur filter

Hi Ernst

Yes, I did study Fourier analysis, and I agree with you on almost everything you
wrote. There are mainly four concepts involved in this discussion: Fourier analysis,
signal, noise, and some sort of inversion technique. From this point of view everything
sounds just right. But the question then is: does Fourier analysis account for the
``behavior'' of the mapping between both domains?


Fourier analysis cannot tell the difference between random noise and some deterministic
behavior, namely, signals that can be characterized by a fractal dimension.
Not being able to distinguish them means that under particular
circumstances Fourier analysis overlooks real structure.
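
To make this concrete, here is a minimal sketch (assuming numpy; the
Gaussian-bump test signal and the random seed are arbitrary choices) using a
standard phase-randomization surrogate: both signals have exactly the same
Fourier amplitude spectrum, yet one is a smooth bump and the other looks like
pure noise, so the amplitude spectrum alone cannot tell them apart.

    import numpy as np

    rng = np.random.default_rng(0)

    # A smooth, clearly structured signal: a Gaussian bump.
    n = 512
    t = np.arange(n)
    signal = np.exp(-0.5 * ((t - n / 2) / 20.0) ** 2)

    # Surrogate: identical amplitude spectrum, random phases.
    amplitude = np.abs(np.fft.rfft(signal))
    phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, amplitude.shape))
    phases[0] = 1.0    # the DC bin must stay real
    phases[-1] = 1.0   # the Nyquist bin must stay real (n is even)
    surrogate = np.fft.irfft(amplitude * phases, n)

    # Same power spectrum to machine precision, very different appearance:
    print(np.allclose(amplitude, np.abs(np.fft.rfft(surrogate))))  # True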
  

>In practice this means that I can have two completely different
>images, one that shows a "normal" image and another one that
>looks exactly like random noise. But when I use a convolution
>on both images the result can be almost identical. The problem with
>least squares optimization is that this procedure cannot
>distinguish between these two images.

I almost agree with you, in the sense that the forward-modeling, Fourier-analysis-based
technique might not be sufficient, although the least squares method (as a whole) might be.

It might be that, if there is a way to incorporate image information that characterizes
the image in some other way, aside from Fourier analysis, then such information (invariant
measures) could be incorporated into a least squares reconstruction task as a
regularization scheme, as sketched below.

Although the above sentence is not an established fact, to my knowledge.
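
To make the idea concrete, here is a minimal sketch of a regularized least
squares deconvolution (assuming numpy and circular boundary conditions; the
function name, the kernel, and the value of lam are illustrative placeholders,
and the simple quadratic Tikhonov penalty stands in for whatever
invariant-measure regularizer one would actually want):

    import numpy as np

    def tikhonov_deconvolve(blurred, kernel, lam=1e-3):
        """Least squares deconvolution with a quadratic (Tikhonov) penalty.

        Minimizes ||k * x - y||^2 + lam * ||x||^2 over the image x; with
        circular boundaries this has a closed form per Fourier component:
        X = conj(K) * Y / (|K|^2 + lam).
        """
        K = np.fft.fft2(kernel, s=blurred.shape)  # transfer function of the blur
        Y = np.fft.fft2(blurred)
        X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)
        return np.real(np.fft.ifft2(X))

    # Toy usage: blur a random "image", add noise, reconstruct.
    rng = np.random.default_rng(1)
    img = rng.random((64, 64))
    kernel = np.outer(np.hanning(9), np.hanning(9))
    kernel /= kernel.sum()
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
    noisy = blurred + 0.01 * rng.standard_normal(img.shape)
    restored = tikhonov_deconvolve(noisy, kernel)

The larger lam is, the less the small Fourier components of the kernel can
amplify noise, at the cost of a blurrier reconstruction.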

>Most techniques that are based on
>least squares minimization are iterative and do not attempt to
>find the real minimum. Normally they require user intervention
>to determine the number of iterative steps.

Although the obtained picture might not represent the original (the real minimum), I
cannot stop thinking about processing pictures at a 30 FPS rate. I'm confident that as
technology advances and computational power improves, a real-time GPL ``refocus'' is
on the horizon, and before long the technique could be applied to viewing Saturn with
a telescope under turbulent seeing:

http://www.djcash.demon.co.uk/astro/webcam/saturn.htm

This might sound like it is getting a little off topic for the list's interests. I
would like to thank you for your insightful comments on the subject, which will
surely inspire me to continue with my research work and its implementation.

regards

Joel Rodríguez
:)


Ernst Lippe wrote:
On Fri, 20 Jun 2003 23:58:03 -0700
Joel Rodriguez <joel@xxxxxxxxx> wrote:

Thanks for your attention to the matter, Ernst:

it is enough information to get me going   :)

I will take a close look at the link, which by the way looks very impressive.
The approach I was thinking of was not conjugate gradient itself, but a variant
of constrained optimization:

http://www-fp.mcs.anl.gov/otc/Guide/SoftwareGuide/Categories/constropt.html

particularly the least squares solution, which I'm familiar with:

http://www.sbsi-sol-optimize.com/products_lssol.htm

P.S. I won't bother you for a while, thanks for your attention. Yesterday's
tequila sometimes makes me feel that P=NP,..he,...  :)

Oh, but that is a valid feeling even when you're sober; nobody has
proved that they are different.

It might be wise to wait with reading the rest of this post until
you have fully recovered :)

But when you have recovered, I would advise you to study Fourier
Analysis. I found it very helpful in explaining why deconvolution is
so difficult. One important fact is that a convolution can be
described as a multiplication in the Fourier domain, i.e.  the Fourier
transform of the result is equal to the multiplication of the Fourier
transform of the input times the Fourier transform of the
convolution. Now this implies that the inverse operation (the
deconvolution) can be described in the Fourier domain as a division by
the Fourier transform of the convolution. But in virtually all cases
the Fourier transform of the convolution contains some values that are
very small. If your convolution is circularly symmetric,
it is possible to prove that its Fourier transform must contain at
least one value that is equal to zero. It is obvious that when the
Fourier transform of the convolution contains any zeroes, there
cannot be an exact inverse, because division by zero is undefined.
Also the small values in the Fourier transform cause problems:
dividing by a small number is of course equivalent to multiplying
by a big number; in other words, the inverse of the convolution will
greatly magnify all errors in the image that correspond to the
small-valued components.
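
A minimal numerical sketch of this effect (assuming numpy; the circularly
wrapped Gaussian kernel, image size, and noise level are arbitrary choices):
the transfer function of the blur never vanishes exactly here, but it becomes
astronomically small at high frequencies, so naive division turns invisible
noise into a useless result.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 128

    # Circularly wrapped Gaussian blur; its transfer function K is
    # strictly positive but astronomically small at high frequencies.
    d = np.minimum(np.arange(n), n - np.arange(n))  # circular distance to 0
    g = np.exp(-0.5 * (d / 3.0) ** 2)
    kernel = np.outer(g, g)
    kernel /= kernel.sum()
    K = np.fft.fft2(kernel)
    print(np.abs(K).min())                          # tiny, but not zero

    # Blur a random image, then add a *tiny* amount of noise.
    img = rng.random((n, n))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * K))
    noisy = blurred + 1e-6 * rng.standard_normal((n, n))

    # Naive deconvolution: divide by K in the Fourier domain.
    restored = np.real(np.fft.ifft2(np.fft.fft2(noisy) / K))

    # The 1e-6 noise is multiplied by 1/|K| per component, ruining the result:
    print(np.max(np.abs(restored - img)))           # enormous error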

This also helps to explain why least squares minimizations frequently
give horrible results. Take an image whose Fourier transform contains
values significantly different from zero only at the components where
the Fourier transform of the convolution is close to zero.
It should be clear that convolving this image gives a result where
every pixel is almost equal to zero. Because convolution is a linear
operation, when you add this image A to another image B and then apply
the convolution, the end result must be equal to the convolution of
A plus the convolution of B. But because the convolution of
A is almost zero, it is easy to see that the convolution of
A + B is almost equal to the convolution of B. This means
that the least squares criterion is not well defined:
when some multiple of A is added to an image, the
least squares distance will only change by a very small amount.
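
This construction is easy to check numerically; a minimal sketch (assuming
numpy; the Gaussian kernel and the 1e-6 threshold are arbitrary): image A has
random Fourier content only where the transfer function of the convolution is
tiny, so A + B and B blur to nearly the same image.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 128

    # A circularly wrapped Gaussian blur and its transfer function K.
    d = np.minimum(np.arange(n), n - np.arange(n))
    g = np.exp(-0.5 * (d / 3.0) ** 2)
    kernel = np.outer(g, g)
    kernel /= kernel.sum()
    K = np.fft.fft2(kernel)

    def convolve(img):
        return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

    # Image A: random Fourier content ONLY where |K| is tiny.
    mask = np.abs(K) < 1e-6
    A_hat = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) * mask
    A = np.real(np.fft.ifft2(A_hat))
    A /= np.abs(A).max()                 # scale A so it is clearly visible

    B = rng.random((n, n))               # an ordinary "normal" image

    # conv(A) is nearly zero, so by linearity conv(A + B) is nearly conv(B):
    print(np.abs(convolve(A)).max())                     # tiny
    print(np.abs(convolve(A + B) - convolve(B)).max())   # equally tiny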

In practice this means that I can have two completely different
images, one that shows a "normal" image and another one that
looks exactly like random noise. But when I use a convolution
on both images the result can be almost identical. The problem
with least squares optimization is that this procedure cannot
distinguish between these two images.
This is the reason that least squares optimization procedures
do not perform well: often the optimal solution looks, visually,
exactly like random noise. Most techniques that are based on
least squares minimization are iterative and do not attempt to
find the real minimum. Normally they require user intervention
to determine the number of iterative steps, as in the sketch below.
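
A minimal sketch of the simplest member of that family, Landweber iteration
(plain gradient descent on the least squares objective, assuming numpy and
circular boundaries; the function name and defaults are illustrative): the
user-chosen n_iter is exactly the stopping parameter mentioned above, since
more iterations fit the data better but amplify noise more.

    import numpy as np

    def landweber_deconvolve(blurred, kernel, n_iter=50):
        """Gradient descent on ||k * x - y||^2 (Landweber iteration).

        Stopping early acts as regularization, which is why the
        iteration count is usually left to the user.
        """
        K = np.fft.fft2(kernel, s=blurred.shape)
        Y = np.fft.fft2(blurred)
        tau = 1.0 / np.max(np.abs(K)) ** 2   # step size that keeps it stable
        X = np.zeros_like(Y)
        for _ in range(n_iter):
            X = X + tau * np.conj(K) * (Y - K * X)   # x += tau * K^H (y - K x)
        return np.real(np.fft.ifft2(X))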

Perhaps this explanation is a bit too convoluted, but I think
that it contains some important points. Feel free to ask if
you have any questions.

greetings,

Ernst Lippe