On 20/12/2011 13:01, Vincent Lefevre wrote:
On 2011-12-20 12:48:53 +0100, Dario Saccavino wrote:
In the second program, if 0.5 < f < 1, the values of a and b eventually
become the smallest representable denormal value and never change
afterwards, resulting in a large number of operations involving
denormal numbers.
Yes, I agree (I forgot about that)... except that if f is close enough
to 1, you won't have subnormals and the program will be fast (like in
the case f <= 0.5).
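For illustration, a minimal sketch (not the original program; the factor
and iteration count are made up) of the kind of loop being discussed,
where repeated multiplication by a factor 0.5 < f < 1 drives a value
down to the smallest subnormal and keeps it there, so most iterations
operate on subnormal operands:

  #include <stdio.h>
  #include <float.h>

  int main(void)
  {
      const float f = 0.75f;   /* any factor with 0.5 < f < 1 shows the effect */
      float a = 1.0f;

      /* After a few hundred iterations a reaches the smallest subnormal
         float and never changes again; every remaining iteration then
         multiplies a subnormal operand, which is slow on many CPUs
         unless FTZ/DAZ is enabled. */
      for (long i = 0; i < 100000000L; i++)
          a *= f;

      printf("final a = %g, FLT_MIN = %g\n", (double)a, FLT_MIN);
      return 0;
  }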
gcc enables FTZ when using SSE and -ffast-math (I think the specific
compiler flag is -funsafe-math-optimizations).
Thanks, good to know...
Therefore the flags needed are -msse2 -mfpmath=sse -ffast-math.
I would discourage the use of -ffast-math, which can affect generic
code very badly (due to -funsafe-math-optimizations). Isn't there
an option to enable FTZ?
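(For reference: besides the compile-time flags above, the FTZ and DAZ
bits in the SSE control register can be set at run time with the
standard intrinsics, which avoids -ffast-math entirely. A sketch,
assuming arithmetic is done in SSE, i.e. x86-64 or -mfpmath=sse:

  #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
  #include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE, needs SSE3 */

  static void enable_ftz_daz(void)
  {
      /* Flush subnormal results to zero ... */
      _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
      /* ... and treat subnormal inputs as zero. */
      _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
  }

Call it once at start-up, and once per thread if needed, since MXCSR is
a per-thread register.)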
There are times when you want IEEE-accurate floating point, because you
are pushing the limits of accuracy, or you want high repeatability. But
normally when you use floating point, you are accepting a certain amount
of inaccuracy. If your code is going to produce incorrect answers
because of the way the compiler/CPU rounds calculations, orders your
sums, or treats denormals and other specials, then I would argue that
your code is probably wrong - perhaps you should be using decimal types,
long doubles, __float128, or some sort of multi-precision maths library
instead.
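As a sketch of the last suggestion, GCC's __float128 can be used
directly (this assumes a GCC target with libquadmath; link with
-lquadmath):

  #include <stdio.h>
  #include <quadmath.h>

  int main(void)
  {
      __float128 x = 1.0Q / 3.0Q;   /* about 33 significant decimal digits */
      char buf[64];

      quadmath_snprintf(buf, sizeof buf, "%.30Qg", x);
      printf("1/3 as __float128: %s\n", buf);
      return 0;
  }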