Re: Internal representation of double variables - 3.4.6 vs 4.1.0

On Fri, 2007-03-09 at 18:29 +0100, max wrote:
> Hi gcc developers and users,
> 
> I have discovered that my code gives different results if compiled
> with different gcc versions, namely 3.4.6 and 4.1.0.
> Since I wanted to understand why, I compiled my code again w/o any
> optimization (-O0) and with debug symbols (-g).
> I found that differences (very small, 10e-12 on a 32-bit machine)
> started to appear in the return value of a routine which performs
> vector-vector multiplication, i.e.
> 
> double vecdot(double *v, int n)
> {
>   int i;
>   double sum = 0;
>   for(i = 0; i < n; i++)
>     sum += v[i] * v[i];
>   return sum;
> }
> 
> even when the elements of v[] are identical. Do these versions use
> different "internal" representations of doubles?
> I agree that the sum above is ill-conditioned, but why do different
> gcc versions give different results even w/o optimization?
> 
> Thanks for your help,
> Max

(Slightly off-topic.  But only slightly!)

After reading this, I went off looking for a gcc option enforcing IEEE
floating-point behaviour, assuming gcc was like the Intel compilers and
by default sacrificed some accuracy in the floating-point model for
speed, even with no optimisation.  I could find none.  So, does gcc use
a well-defined and reproducible floating-point model by default?  If
not, can one turn on strict IEEE arithmetic?

Ciao
Terry

-- 
Dr Terry Frankcombe
Physical Chemistry, Department of Chemistry
Göteborgs Universitet
SE-412 96 Göteborg Sweden
Ph: +46 76 224 0887   Skype: terry.frankcombe
<terry@xxxxxxxxxx>

