Re: Is it OK that gcc optimizes away overflow check?

Agner Fog <agner@xxxxxxxxx> writes:

> I have a program where I check for integer overflow. The program
> failed, and I found that gcc has optimized away the overflow check. I
> filed a bug report and got the answer:
>> Integer overflow is undefined. You have to check before the fact, or compile
>> with -fwrapv.
> ( http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49820 )
>
> I disagree for several reasons:

I see that I've already been quoted in the bug report.  Here I'll just
stress that I think it's important that gcc implement the relevant
standards.  There are arguments on both sides of an issue like whether a
compiler should optimize on the assumption that signed overflow never
occurs.  When facing arguments on both sides, which should we pick?
When possible and feasible, we pick the alternative that is written in
the standard.  That seems to me to be the most reasonable way to settle
such a question.


> 1). It is often easier and more logical to check for overflow after it
> happens than before. It can be quite complicated to write code that
> predicts an overflow before it happens, in a portable way that works
> with all integer sizes. Checking for overflow after it happens is the
> only way that is sure to work in a hypothetical system that uses
> something other than 2's complement representation.

It's reasonably straightforward to check for overflow of any operation
by doing the arithmetic in unsigned types.  By definition of the
language standard, unsigned types wrap rather than overflow.
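
For addition, for instance, a check along these lines works (just a
sketch; the helper name is mine, and the boundary tests assume the
usual two's-complement range of int):

    #include <limits.h>

    /* Sketch: report whether a + b would overflow int.  The sum is
       formed in unsigned int, where wrap-around is well defined, so
       no undefined signed overflow is ever executed.  */
    static int add_overflows (int a, int b)
    {
      unsigned int us = (unsigned int) a + (unsigned int) b;

      if (a >= 0 && b >= 0)
        return us > (unsigned int) INT_MAX;   /* result too large */
      if (a < 0 && b < 0)
        return us <= (unsigned int) INT_MAX;  /* result too small */
      return 0;                               /* mixed signs cannot overflow */
    }

The conversions to unsigned int and the unsigned addition are fully
defined, so the compiler has no licence to remove the comparisons; the
caller only performs the signed addition once it knows the result fits.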


> 2). This is a security problem. It takes a very twisted mind to
> predict that your code is not safe when you are actually checking for
> overflow.

I certainly recommend that the security conscious use
-fno-strict-overflow or -Wno-strict-overflow, along with a number of
other options such as -fstack-protector.  gcc serves a number of
different communities, though.  Many programmers have no reason to be
security conscious.  Repeating myself rhetorically, what should be the
default behaviour?  The one documented in the standard.
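
To make that concrete, the kind of after-the-fact check under
discussion, and the effect of the options, looks roughly like this (a
sketch only; the function name and the constant are mine, not taken
from the bug report):

    #include <limits.h>

    /* gcc -O2 test.c                       : the test below may be folded away
       gcc -O2 -fwrapv test.c               : wrap-around is defined; test kept
       gcc -O2 -fno-strict-overflow test.c  : the test should survive as well */
    int add_or_cap (int x)
    {
      int y = x + 100;
      if (y < x)      /* relies on signed wrap-around, which is undefined */
        return INT_MAX;
      return y;
    }

With strict overflow in effect the compiler may assume that x + 100
does not wrap, so y < x can never be true and the branch is dead code.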


> 3). I think that you are interpreting the C/C++ standard in an
> over-pedantic way. There are good reasons why the standard says that
> the behavior in case of integer overflow is undefined. 2's complement
> wrap-around is not the only possible behavior in case of
> overflow. Other possibilities are: saturate, signed-magnitude
> wrap-around, reserve a bit pattern for overflow, throw an
> exception. If a future implementation uses internal floating point
> representation for integers then an overflow might variously cause
> loss of precision, INF, NAN, or throw an exception. I guess this is
> what is meant when the standard says the behavior is undefined. What
> the gcc compiler is doing is practically denying the existence of
> overflow (
> http://www.mail-archive.com/pgsql-hackers@xxxxxxxxxxxxxx/msg105239.html
> ) to the point where it can optimize away an explicit check for
> overflow. I refuse to believe that this is what the standard-writers
> intended. There must be a sensible compromise that allows the
> optimizer to make certain assumptions that rely on overflow not
> occurring without going to the extreme of optimizing away an overflow
> check.

It would be interesting to try to write such a compromise.


> 4). The bug in my case disappears if I compile with -fwrapv or
> -fno-strict-overflow or without -O2, but this is not my point. My
> point is that gcc should be useful to a programmer with average
> skills.

There are many, many ways to cut yourself when using C++.  Personally I
suspect that a programmer with average skills should stick to Go or an
interpreted language.

Ian

