On 25-07-2011 08:04, Ian Lance Taylor wrote:
> There are arguments on both sides of an issue like whether a compiler
> should optimize based on strict overflow. When facing arguments on
> both sides, which should we pick? When possible and feasible, we pick
> the alternative which is written in the standard. That seems to me to
> be the most reasonable solution to such a problem.
My point is that you are over-interpreting the standard when you
conclude that the compiler is allowed to do anything in case of overflow.
> It's reasonably straightforward to check for overflow of any operation
> by doing the arithmetic in unsigned types. By definition of the
> language standard, unsigned types wrap rather than overflow.
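For a single operation like addition, that does work. A sketch of such a
check (my illustration, assuming two's complement int):

#include <limits.h>

/* Detect overflow of a + b by doing the addition in unsigned
   arithmetic, where wraparound is well defined, then checking
   whether the wrapped result is consistent with the operand signs. */
static int add_overflows(int a, int b) {
    unsigned int sum = (unsigned int)a + (unsigned int)b; /* defined */
    int negative = sum > (unsigned int)INT_MAX; /* sign bit of result */
    if (a >= 0 && b >= 0) return negative;      /* wrapped past INT_MAX */
    if (a < 0 && b < 0) return !negative;       /* wrapped past INT_MIN */
    return 0;  /* mixed signs can never overflow */
}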
A check on the result of abs(), however, is still optimized away without warning:
#include <stdlib.h>

int func(int x) {
    int y = abs(x);
    /* abs() overflows only for INT_MIN, whose bit pattern read as
       unsigned is greater than ~0u >> 1, i.e. INT_MAX */
    if ((unsigned int)y > ~0u >> 1) y = 123;
    return y;
}
Unsigned and signed types don't overflow at the same point. There is no
straightforward way to convert the overflow of the abs() function into
an unsigned wraparound.
Is this what you call reasonably straightforward?

int x, y;
/* ~(~0u >> 1) is the bit pattern of INT_MIN, the only input for
   which abs() overflows on a two's complement machine */
if ((unsigned int)x == ~(~0u >> 1)) { /* deal with overflow */ }
else y = abs(x);
The code will become ugly and unreadable if you fill it with checks like
this. And it still relies on 2's complement representation, which is not
guaranteed by the standard.
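The closest you can get to a portable check is to compare against the
limits from <limits.h>. A sketch (the wrapper is my own invention):

#include <limits.h>
#include <stdlib.h>

/* abs(x) can only overflow when the magnitude of x exceeds INT_MAX,
   which on any representation means x < -INT_MAX. */
int checked_abs(int x, int *overflow) {
    if (x < -INT_MAX) {   /* only INT_MIN, on a two's complement machine */
        *overflow = 1;
        return INT_MAX;   /* or whatever the caller prefers */
    }
    *overflow = 0;
    return abs(x);
}

And you still have to route every call site through a wrapper like that.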
> I certainly recommend that the security conscious use
> -fno-strict-overflow or -Wstrict-overflow, along with a number of
> other options such as -fstack-protector. gcc serves a number of
> different communities, though. Many programmers have no reason to be
> security conscious. Repeating myself rhetorically, what should be the
> default behaviour? The one documented in the standard.
You don't know that you need to be security conscious until it is too
late :-)
>> 3). I think that you are interpreting the C/C++ standard in an
>> over-pedantic way. There are good reasons why the standard says that
>> the behavior in case of integer overflow is undefined. 2's complement
>> wrap-around is not the only possible behavior in case of overflow.
>> Other possibilities are: saturate, signed-magnitude wrap-around,
>> reserve a bit pattern for overflow, throw an exception. If a future
>> implementation uses internal floating point representation for
>> integers then an overflow might variously cause loss of precision,
>> INF, NAN, or throw an exception. I guess this is what is meant when
>> the standard says the behavior is undefined. What the gcc compiler is
>> doing is practically denying the existence of overflow (
>> http://www.mail-archive.com/pgsql-hackers@xxxxxxxxxxxxxx/msg105239.html
>> ) to the point where it can optimize away an explicit check for
>> overflow. I refuse to believe that this is what the standard-writers
>> intended. There must be a sensible compromise that allows the
>> optimizer to make certain assumptions that rely on overflow not
>> occurring without going to the extreme of optimizing away an overflow
>> check.
> It would be interesting to try to write such a compromise.
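Any such compromise would at least have to preserve the canonical check
below, which gcc currently simplifies to "always false" when
-fstrict-overflow is in effect (it is implied by -O2):

/* gcc assumes x + 1 cannot wrap, so the comparison is folded to 0
   and the overflow check silently disappears. */
int wraps(int x) {
    return x + 1 < x;
}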
I think it would be more sound to use pragmas than command line options.
A pragma could be placed precisely at the point in the code where there
is a problem, stating whether overflow should be ignored or not. If you
apply a command line option to a specific module somewhere in the
makefile of a big project, other people working on the same project
would not know why it is there, and it could easily be messed up when
the project is restructured.
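gcc already has function-level optimization pragmas that could serve as
a model. A sketch of how it might look (whether "no-strict-overflow" is
actually accepted and honored here is an assumption on my part):

#pragma GCC push_options
#pragma GCC optimize ("no-strict-overflow")

/* Overflow checks in this function should survive optimization. */
int security_sensitive(int x) {
    return x;  /* placeholder body */
}

#pragma GCC pop_options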
The compiler could either use the safe options by default and produce
warning messages at missed optimization opportunities, or use unsafe
options by default and produce warning messages when it makes unsafe
optimizations.
> There are many many ways to cut yourself when using C++. Personally I
> suspect that a programmer with average skills should stick to Go or an
> interpreted language.
I don't think Go is mature enough to be the first choice of beginners.
The same applies to D.
Java, C#, VB and the like are terribly slow in my opinion.