On 11/23/2010 04:23 PM, Vincent Lefevre wrote:
> On 2010-11-23 08:25:24 +0200, Nikos Chantziaras wrote:
>> I have a problem figuring out the precision of floating point operations.
>> Basically, DBL_EPSILON seems to give a wrong value.
>> [...]
>> double epsilon = 1.0;
>> while (1.0 + (epsilon / 2.0) > 1.0) {
>>     epsilon /= 2.0;
>> }
>> printf("epsilon = %e\n", epsilon);
>> printf("DBL_EPSILON: %e\n", DBL_EPSILON);
>> [...]
>> The values don't match, and DBL_EPSILON gives a much bigger value than
>> the detected one. Why is that? It would seem that compiling on 32-bit
>> with -O0 yields higher precision. Is this a result of the FPU being
>> used (or not used)?
> The extended precision is provided by the processor in 32-bit mode
> (FPU instead of SSE).
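
For what it's worth, C99's FLT_EVAL_METHOD macro in <float.h> is supposed
to report this evaluation mode at compile time; a quick probe, assuming a
C99 compiler that defines it:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* FLT_EVAL_METHOD (C99, <float.h>):
     *   0  operations evaluate in the range/precision of their type
     *      (typical with SSE math, the x86-64 default)
     *   1  float and double expressions evaluate as double
     *   2  everything evaluates as long double (typical with the x87)
     *  -1  indeterminable
     */
    printf("FLT_EVAL_METHOD = %d\n", (int) FLT_EVAL_METHOD);
    return 0;
}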
Is the FPU hardware still available in x86-64 CPUs (so that -mfpmath=387
would make use of it), or is it emulated? On a similar note, are
floating-point computations faster with SSE than with the 387?
It's not that I don't want the excess precision (it looks like a Good
Thing to me: fewer rounding errors). It's that I'm not sure whether I
can count on it being there. Would the above routine be a good enough
check to see whether excess precision is present?
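
E.g. something along these lines, as a sketch (assuming GCC-style excess
precision on x87; the volatile qualifiers are only there to keep the
compiler from folding everything away, and the compiler may still spill
the live sum to memory, so a negative result isn't conclusive):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* 'small' is below half an ulp of 1.0, so in strict double
       arithmetic 1.0 + small rounds back to exactly 1.0.  With x87
       excess precision the sum is kept in an 80-bit register and the
       extra bits survive until the value is actually stored. */
    volatile double small = DBL_EPSILON / 4.0;
    volatile double stored = 1.0 + small;  /* the store rounds to double */

    if ((1.0 + small) - stored != 0.0)
        printf("excess precision detected\n");
    else
        printf("no excess precision (operations round to double)\n");
    return 0;
}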