Hi gcc developers and users,

I have discovered that my code gives different results when compiled with different gcc versions, namely 3.4.6 and 4.1.0. To understand why, I recompiled the code without any optimization (-O0) and with debug symbols (-g). I found that the differences (very small, around 10e-12 on a 32-bit machine) first appear in the return value of a routine that performs a vector-vector multiplication, i.e.

    double vecdot(double *v, int n)
    {
        int i;
        double sum = 0;
        for (i = 0; i < n; i++)
            sum += v[i] * v[i];
        return sum;
    }

even though the elements of v[] are identical in both builds. Do these versions use a different "internal" representation of doubles? I agree that the sum above is ill-conditioned, but why do different gccs give different results even without optimization?

Thanks for your help,
Max