Hello,
On Tue, 23 Nov 2010, Nikos Chantziaras wrote:
I have a problem figuring out the precision of floating point operations.
Basically, DBL_EPSILON seems to give a wrong value.
Consider this small C program which tries to detect the smallest possible
value of a double that still makes a difference in a "1 + x > 1" floating
point comparison (some people refer to that value as "machine epsilon"):
[snip]
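(The program itself was snipped above; as a rough sketch of the kind of
detection loop described, assuming the usual "keep halving until adding no
longer changes 1.0" approach, it might look roughly like this, though not
necessarily the original code:)

#include <stdio.h>
#include <float.h>

int main(void)
{
    double eps = 1.0;
    /* Halve eps until adding it to 1.0 no longer makes a difference
       in the comparison, i.e. until 1.0 + eps/2 compares equal to 1.0. */
    while (1.0 + eps / 2.0 > 1.0)
        eps /= 2.0;
    printf("epsilon = %e\n", eps);
    printf("DBL_EPSILON: %e\n", DBL_EPSILON);
    return 0;
}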
Compiling and running this with a variety of GCC versions (ranging from 4.3.x
to 4.5.x), on a variety of systems (32bit, 64bit, multilib), either with -m64
(with or without optimization) or with -m32 *with* optimization, results in the
following output:
epsilon = 2.220446e-16
DBL_EPSILON: 2.220446e-16
The detected value and the value provided by DBL_EPSILON match.
However, compiling with -m32 and *without* optimization (-O0) always results
in:
epsilon = 1.084202e-19
DBL_EPSILON: 2.220446e-16
The values don't match and DBL_EPSILON gives a much bigger value than the
detected one. Why is that? It would seem that compiling on 32bit with -O0
yields higher precision. Is this a result of the FPU being used (or not
used)?
Yes. You can compare the value you found to LDBL_EPSILON. You can also
try to play with -mfpmath or -mpc64.
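For instance, a small check like the following sketch should show that the
value you detected under -m32 -O0 agrees with LDBL_EPSILON rather than
DBL_EPSILON (on x86, long double is the 80-bit extended type, so its epsilon
is 2^-63, about 1.084202e-19):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* On x86, long double is the 80-bit x87 extended type, so
       LDBL_EPSILON (2^-63) matches the value detected with -m32 -O0. */
    printf("LDBL_EPSILON: %Le\n", LDBL_EPSILON);
    printf("DBL_EPSILON:  %e\n", DBL_EPSILON);
    return 0;
}

Compiling with -mfpmath=sse (which on a 32-bit target also needs SSE enabled,
e.g. via -msse2) should make the -m32 -O0 result match DBL_EPSILON, since
doubles are then computed in SSE registers at their nominal precision.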
--
Marc Glisse