Internal representation of double variables - 3.4.6 vs 4.1.0

Hi gcc developers and users,

I have discovered that my code gives different results when compiled
with different gcc versions, namely 3.4.6 and 4.1.0.
To understand why, I recompiled my code without any optimization (-O0)
and with debug symbols (-g).
I found that differences (very small, about 10e-12, on a 32-bit machine)
started to appear in the return value of a routine that computes a
vector dot product, i.e.

double vecdot(double *v, int n)
{
  int i;
  double sum = 0.0;
  /* accumulate the squares of the elements of v */
  for (i = 0; i < n; i++)
    sum += v[i] * v[i];
  return sum;
}

even though the elements of v[] are identical. Do these versions use
a different "internal" representation of doubles?
I agree that the sum above is ill-conditioned, but why do different
gcc versions give different results without optimization?
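
For what it's worth, here is a minimal self-contained sketch of what I
suspect might be going on (the function names and test data are just
illustrative, not my real code): if one build keeps intermediates in
x87 80-bit registers while the other rounds them to 64-bit doubles,
then forcing every intermediate through memory with volatile (similar
in spirit to -ffloat-store) should make the two builds agree, and the
plain loop would show the discrepancy.

#include <stdio.h>
#include <stdlib.h>

/* Plain accumulation: the product and the running sum may be kept in
 * x87 80-bit registers, depending on compiler version and flags. */
static double vecdot_plain(const double *v, int n)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < n; i++)
        sum += v[i] * v[i];
    return sum;
}

/* Every intermediate is forced through a 64-bit memory slot, so the
 * product and the running sum are rounded to double at each step. */
static double vecdot_rounded(const double *v, int n)
{
    volatile double sum = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        volatile double p = v[i] * v[i];
        sum += p;
    }
    return sum;
}

int main(void)
{
    enum { N = 1000000 };
    double *v = malloc(N * sizeof *v);
    int i;
    if (!v)
        return 1;
    /* values with no exact binary representation */
    for (i = 0; i < N; i++)
        v[i] = 0.1 * (i % 7 + 1);
    printf("plain  : %.17g\n", vecdot_plain(v, N));
    printf("rounded: %.17g\n", vecdot_rounded(v, N));
    free(v);
    return 0;
}

Whether the two printed values actually differ will of course depend
on the target, the gcc version, and flags such as -ffloat-store or
-mfpmath, so this is only meant to illustrate the kind of effect I am
asking about.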

Thanks for your help,
Max
