Re: Floating point performance issue

On 20/12/2011 14:43, Vincent Lefevre wrote:
> On 2011-12-20 14:01:19 +0100, David Brown wrote:
> > There are times when you want IEEE-accurate floating point, because you
> > are pushing the limits of accuracy, or you want high repeatability.  But
> > normally when you use floating point, you are accepting a certain amount
> > of inaccuracy.  If your code is going to produce incorrect answers
> > because of the way the compiler/cpu rounds calculations, orders your
> > sums, or treats denormals and other specials, then I would argue that
> > your code is probably wrong - perhaps you should be using decimal types,
> > long doubles, __float128, or some sort of multi-precision maths library
> > instead.
>
> I disagree: the operations could be written in an order to avoid some
> inaccuracies (such as huge cancellations) or to emulate more precision
> (e.g. via Veltkamp's splitting) or to control the rounding (see some
> rint() implementation http://sourceware.org/bugzilla/show_bug.cgi?id=602
> for instance). On such code, unsafe optimizations could yield problems.


I guess that's why it's an option - then we can choose. I would still say that most floating point code does not need such control, and that situations where it matters are rather specialised. But that's just my unfounded opinion - judging from your signature you /do/ need such tight control in your work, while I've only learned today that "-ffast-math" has effects other than possibly changing the generated code.

