Re: how to make gcc warn about arithmetic signed overflow

On Tue, 24 Sep 2013 18:48:08 +0100
Andrew Haley <aph@xxxxxxxxxx> wrote:

> > Regardless of optimization, the CPU, not
> > the compiler, executes the ADD or MUL operation, or whatever, and
> > sets or does not set the overflow bit accordingly, right?  Why
> > can't the compiler generate code that senses that, and raises a
> > runtime error?
> 
> Because the compiler does a lot of rewriting.  There is not a one-to-
> one mapping between operations in your source program and
> instructions.  An operation might occur in your program but not in the
> object code.  For example, say you do this:
> 
>    int n = m + BIG_NUMBER;
>    return n - BIG_NUMBER;
> 
> There is an overflow in your source, but not in the object code.  So
> no trap will occur.

I thought that's what you meant.  I was confused by "in your source"
because of course source code doesn't overflow.  (Well, I've seen some
code bases that I'd describe that way, but that's a different issue!)  

You mean that a naïve rendering of the source code implies an overflow
where none might exist in the actual emitted object code.  And,
presumably, the converse: that even if the source is written such that
there logically can't be an overflow, the compiler might render object
code that does.  

As far as I'm concerned, that's neither here nor there.  When the
compiler is done, there is object code that does execute on a real CPU
and does -- on some architectures -- set an overflow bit in the status
word for overflowing integer operations.  

I saw recommendations here for -ftrapv, which is said to be broken (?)
and is defined only for signed integer operations, and for -gnato,
which afaict applies only to Ada.  So in C and C++ there appears to be
no option that makes use of the processor's integer overflow status
bit.

> > I've written a lot of SAFE_CAST macros that check the return of
> > sizeof or strlen(3) before casting it to an int and assigning the
> > result to something that *must* be an int.  That code is terribly
> > inefficient, clumsy to read, noise on the screen, really.  But made
> > necessary IMO because the compiler conceals what the processor
> > reports.  
> 
> I'm not quite sure what you mean by this.  Why would you want to cast
> it to an int, anyway?  Desperately short of space?

Many communication protocols use 16 or 32 bits to represent a value
that will *surely* fit, such as the size of a packet.  Compute that
size with sizeof or strlen, and suddenly you're in size_t space.
Unless there's an error, it's perfectly safe to assign the result to
the type required by the protocol.  Me, I'm conservative in what I emit;
I trust but verify.  
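
For what it's worth, the checks I mean look roughly like this -- a
minimal sketch, with checked_u16 and the hdr/payload names invented
purely for illustration (my real SAFE_CAST macros are messier):

   #include <stdint.h>
   #include <stdio.h>
   #include <stdlib.h>
   #include <string.h>

   /* Narrow a size_t (e.g. from strlen or sizeof) to the uint16_t a
    * protocol header calls for, aborting if the value cannot fit. */
   static uint16_t checked_u16(size_t n)
   {
       if (n > UINT16_MAX) {
           fprintf(stderr, "length %zu exceeds 16-bit field\n", n);
           abort();
       }
       return (uint16_t)n;
   }

   /* usage (hypothetical): hdr.payload_len = checked_u16(strlen(payload)); */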

It's also not very hard to find libraries in common use -- some recently
defined, sadly -- that use int for lengths.  ODBC and Apache Thrift
come to mind.  

--jkl




