Re: Floating point performance issue

On 20/12/2011 15:24, Vincent Lefevre wrote:
On 2011-12-20 14:57:16 +0100, David Brown wrote:
On 20/12/2011 14:43, Vincent Lefevre wrote:
I disagree: the operations could be written in an order to avoid some
inaccuracies (such as huge cancellations) or to emulate more precision
(e.g. via Veltkamp's splitting) or to control the rounding (see some
rint() implementation http://sourceware.org/bugzilla/show_bug.cgi?id=602
for instance). In such code, unsafe optimizations could cause problems.

I guess that's why it's an option - then we can choose.

Really, it should never have been an option, since it may produce
incorrect code. Such kinds of optimization should only have been
enabled via pragmas, and only in a well-documented manner, so that
the developer can know how this code could be transformed (and
only the developer should be allowed to enable such optimizations).


As a general point about unsafe optimisations, or ones that cause deviations from the standards, I think it is important that such features are not enabled by default or by any general -O flag - and to my knowledge, gcc always follows that rule.
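
To illustrate the sort of code Vincent is talking about, here is a minimal sketch (my example, not his) of Kahan compensated summation - a close cousin of the splitting tricks he mentions. It depends entirely on evaluation order, and "-ffast-math" (which turns on reassociation through -fassociative-math) permits the compiler to simplify the correction term away:

    #include <stddef.h>

    /* Kahan compensated summation: 'c' carries the low-order bits
       that are lost each time 'sum' absorbs an element. */
    double kahan_sum(const double *x, size_t n)
    {
        double sum = 0.0, c = 0.0;
        for (size_t i = 0; i < n; i++) {
            double y = x[i] - c;  /* apply the running correction */
            double t = sum + y;   /* low bits of y can be lost here */
            c = (t - sum) - y;    /* algebraically zero; numerically,
                                     exactly the bits that were lost */
            sum = t;
        }
        return sum;
    }

Under real-number algebra "(t - sum) - y" is identically zero, so a compiler that has been told floating point is associative may delete the compensation entirely. The code still compiles and runs - it just silently degrades to naive summation.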

But I think compiler flags are a suitable choice as they make it easier for the user to apply them to the program as a whole. An example of this would be "-fshort-double" or "-fsingle-precision-constant". These specifically tell the compiler to generate non-conforming code, but are very useful for targets that have floating-point hardware for single-precision but not for double-precision.
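
To make that concrete, here is a minimal sketch (my own, assuming a target whose FPU handles only single precision). In standard C an unsuffixed constant such as 2.5 has type double, so the multiplication below is done in double - in software, on such a target. "-fsingle-precision-constant" (or simply writing 2.5f) keeps the whole computation in hardware:

    float scale(float x)
    {
        /* 2.5 has type double in standard C, so 'x' is promoted,
           the multiply is done in double precision, and the result
           is converted back to float - a round trip through the
           soft-float routines on a single-precision FPU. */
        return x * 2.5;
    }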

I would still say that most floating point code does not need such
control, and that situations where it matters are rather
specialised.

I think that if this were true, there would have never been an
IEEE 754 standard.


The IEEE 754 standard is like many other such standards - it is very useful, perhaps critically so, to some users. For others, it's just a pain.

I work with embedded systems. If the code generators (of gcc or the other compilers I use) and the toolchain libraries conform strictly to IEEE 754, then the code can often be significantly slower with no possible benefits. If I am working with a motor controller, I don't want the system to run slower just to get conforming behaviour if the speed is infinite, the position is denormal and the velocity might be positive or negative zero. I don't care if the motor's position is rounded up or down to the nearest nanometer - but I /do/ care if it takes an extra microsecond to make that decision.

For processors which do not have hardware floating point support, IEEE 754 is often not the best choice of format - it takes time to pack and unpack the fields. But in almost all cases, it's what compilers use, because that's what the C standards expect. Users would often prefer faster but non-conforming floating point, and they certainly don't want to waste time on non-normal numbers.
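
To show what that packing and unpacking costs, here is a minimal sketch of the field extraction (my illustration, not any particular soft-float library). Every software-emulated operation on an IEEE 754 binary32 value starts with something like this, and ends with the reverse:

    #include <stdint.h>
    #include <string.h>

    /* Split an IEEE 754 binary32 value into its sign, unbiased
       exponent and fraction fields. */
    void unpack_binary32(float f, uint32_t *sign, int32_t *exp,
                         uint32_t *frac)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);  /* well-defined type pun */
        *sign = bits >> 31;                            /* 1 bit  */
        *exp  = (int32_t)((bits >> 23) & 0xFF) - 127;  /* 8 bits, bias 127 */
        *frac = bits & 0x7FFFFF;                       /* 23 bits */
    }

A format chosen to suit the processor - say, one with a byte-aligned exponent - avoids much of this shifting and masking, which is why non-IEEE formats can be faster when there is no FPU.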

Again, I should state my qualifications: I'm just a user, and have no statistics from other users to back up my claims. My opinions are my own, of course, and my extrapolations to other users are based only on how I've seen floating point being used.

My belief is that for the great majority of users and uses, floating point numbers are treated as rough values. People use them knowing they are inaccurate, and almost never knowing or caring exactly how accurate or inaccurate they are. They use them for normal, finite numbers. Such users mostly do not know or care what IEEE 754 is.

There is, of course, a percentage of users who think of floating point numbers as precise. They are mistaken - and would be mistaken with or without IEEE 754.


But that's just my unfounded opinion - judging from your signature
you /do/ need such tight control in your work, while I've only
learned today that "-ffast-math" has effects other than possibly
changing the generated code.

This is on a different matter, but you can look at

   http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323

(see also the huge number of duplicates). Many people complain
about floating-point when it gives "unexpected" results...


That's probably one of the most common mistakes with floating point - the belief that two floating point numbers will compare equal just because mathematically they should be equal. This "bug" is not, IMHO, a bug - it's a misunderstanding of floating point. Making "-Wfloat-equal" a default flag would eliminate many of these mistakes.
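
A minimal example of the misunderstanding (my own illustration, not taken from the bug report):

    #include <stdio.h>

    int main(void)
    {
        double a = 0.1 + 0.2;

        /* Mathematically a == 0.3, but none of 0.1, 0.2 and 0.3 is
           exactly representable in binary floating point, so on a
           typical IEEE 754 system the comparison is false.
           "-Wfloat-equal" warns about the '==' below. */
        if (a == 0.3)
            printf("equal\n");
        else
            printf("not equal: a = %.17g\n", a);
        return 0;
    }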


Best regards,

David

