Long double problem and -funsafe-math-optimizations

Hello,

I am working on a cross-platform C++ project that, to keep it short,
does a lot of math calculations.  One of our functions requires better
precision than double, so we're using a long double.  This has worked
great for us on all of our platforms except IRIX.  On IRIX, one of our
test cases produces an intermediate value that must be malformed
somehow.  The odd part is that when this intermediate value is
printed, it prints correctly; however, any arithmetic we do on the
value gives unreliable results.  For example:

printf("%.50Le", value) results in:
9.99999999999999999999999999997239000000000000000000e-01
(which is correct)
printf("%.50Le", value + 0.0L) results in:
3.99999999999999999999999999999723900000000000000000e+00
(should be the same as above)
printf("%.50Le", 1 / value) results in:
0.00000000000000000000000000000000000000000000000000e+00
(should be close to 1)
printf("%.50Le", (2.0 / ((value * 2.0) * .5))) results in:
5.00000000000000000000000000000345100000000000000000e-01
(should be close to 2)

So, there's definitely a bug somewhere that is generating this
"special" number.  But, for my purposes, that's beside the point right
now.

I've been investigating this problem and ran across the
-funsafe-math-optimizations compiler flag.  With this optimization
turned on, we no longer get this special number.  However, if we were
ever to get this special value again, the underlying math bug would
still be there.  There's very little documentation about what exactly
turning this optimization on does, aside from a short statement that
it may produce code that does not conform to IEEE or ANSI math rules.
What exactly does this mean?  I know that on some platforms you can
enable the FPU to use higher-precision numbers while values live in
the FPU registers (fp10.obj on Windows does this), which can violate
IEEE rules.  Is that the sort of thing this optimization does?  Any
insight would be appreciated.

My only real concern with turning this optimization on is whether the
in-memory representation of a double could change (for instance, not
being normalized when IEEE rules say it should be).  If it did, we
could have some serious problems with third-party libraries.  I doubt
this is the case, but I would like to make sure.  Also, the name of
the optimization is a bit... scary.

Thanks for any help regarding this problem.

gcc version: 3.3 (the version from SGI freeware, http://freeware.sgi.com)
bad value information:
hex representation: 0x3FF0000000000000B9CC000000000000
long double mantissa size (bits): 106
long double size (bytes): 16
long double max exponent (decimal): 308
long double min exponent (decimal): -291

Case Taintor
