Re: difference in calculation result when using gcc vs Visual Studio and optimisation flag

On 2018-04-26 12:53 +0200, Mason wrote:
> On 26/04/2018 11:26, Jack Andrews wrote:
> 
> > This reduces it to a one-platform problem.  gcc -O different to gcc
> > Maybe I'm being idealistic, but why should optimization change results?
> 
> Because, for example on x86 platforms, 'gcc -O0' will use the 80-bit
> x87 stack regs, while 'gcc -O2' will use the 64-bit SSE regs.
> 
> It is better to give up the notion that floating point computations
> are exact, and accept the fact that small errors do change the
> results on different implementations (and, as pointed out, even on
> the same implementation with different options).
> 
> It is also worth pointing out that sometimes these small errors
> accumulate into huge errors. Floating point is tricky.
> 
> cf. https://en.wikipedia.org/wiki/Loss_of_significance
> 
> Regards.

A new PR323-like issue.

cf. http://gcc.gnu.org/wiki/FAQ#PR323
-- 
Xi Ruoyao <ryxi@xxxxxxxxxxxxxxxxx>
School of Aerospace Science and Technology, Xidian University


