I have a problem figuring out the precision of floating point
operations. Basically, DBL_EPSILON seems to give a wrong value.
Consider this small C program which tries to detect the smallest
possible value of a double that still makes a difference in a "1 + x >
1" floating point comparison (some people refer to that value as
"machine epsilon"):
#include <stdio.h>
#include <float.h>

int main(void)
{
    double epsilon = 1.0;
    /* keep halving epsilon as long as adding half of it to 1.0 still changes the result */
    while (1.0 + (epsilon / 2.0) > 1.0) {
        epsilon /= 2.0;
    }
    printf("epsilon = %e\n", epsilon);
    printf("DBL_EPSILON: %e\n", DBL_EPSILON);
    return 0;
}
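(Side note: to make sure I'm reading the decimal output correctly, the exact powers of two can also be printed with C99's "%a" hexadecimal float format, e.g. by replacing the two printf calls with the lines below; this assumes a C99-conforming printf.)

    printf("epsilon = %a\n", epsilon);          /* hexadecimal float, e.g. 0x1p-52 */
    printf("DBL_EPSILON: %a\n", DBL_EPSILON);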
Compiling and running this on a variety of GCC versions (from 4.3.x to
4.5.x), on a variety of systems (32-bit, 64-bit, multilib), and with
-m64 with or without optimization, or with -m32 *with* optimization,
results in the following output:
epsilon = 2.220446e-16
DBL_EPSILON: 2.220446e-16
The detected value and the value provided by DBL_EPSILON match.
However, compiling with -m32 and *without* optimization (-O0) always
results in:
epsilon = 1.084202e-19
DBL_EPSILON: 2.220446e-16
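(Unless I'm misreading the numbers, the detected value corresponds to 2^-63, while DBL_EPSILON is 2^-52.)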
The values don't match and DBL_EPSILON gives a much bigger value than
the detected one. Why is that? It would seem that compiling with -m32
and -O0 yields higher precision. Is this a result of the FPU being
used (or not used)?
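In case it's relevant, here is a variant I sketched (but have not tested on all of the systems above) that forces each intermediate sum through a 64-bit double in memory via a volatile temporary; if extra internal precision in the FPU registers is the cause, I'd expect this version to report 2.220446e-16 even with -m32 -O0:

#include <stdio.h>
#include <float.h>

int main(void)
{
    double epsilon = 1.0;
    /* storing to a volatile double rounds the sum to 64-bit double
       precision, discarding any extra precision kept in FPU registers */
    volatile double sum = 1.0 + (epsilon / 2.0);
    while (sum > 1.0) {
        epsilon /= 2.0;
        sum = 1.0 + (epsilon / 2.0);
    }
    printf("epsilon = %e\n", epsilon);
    printf("DBL_EPSILON: %e\n", DBL_EPSILON);
    return 0;
}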