>C tries to generate integers by default, so when you write that number,
>which is larger than the signed integer boundary, it will generate an
>unsigned integer instead and let you know about it. You should be able
>to get rid of the warning by writing it as 2147483648U (U for unsigned).
>

Yes. See C++ Std 2.13.1 for more info on this.

However, I'd like to point out that the standard doesn't guarantee that
2147483648 is representable by an int, or that an int is 4 bytes in
length - that is implementation dependent (really, CPU/platform
dependent - see 3.9.1). On a 16-bit machine, with a compiler that has
4-byte long integers, you may need to write "2147483648UL".

You didn't mention the platform or the exact compiler you're using - I
assumed g++, but, from the extensions in the example below, maybe it was
just the C compiler.

But I have a question - how does one write this portably (in C++)?

typedef unsigned int uint_32_t ; // similar to C99 types and common practice
...
uint_32_t squiggy( 2147483648 ) ; // Does this work w/o generating a warning?
...

If the above code still generates a warning, then does one resort to the
following?

// Macro definition changes with platform, depending on length of integer
#define UINT32_LITERAL( x ) x ## UL
...
uint_32_t squiggy( UINT32_LITERAL( 2147483648 ) ) ;
...

>On 12/04/2006 03:48 PM, Trevis Rothwell wrote:
>> Given the following program:
>>
>> int main()
>> {
>>     unsigned int squiggy = 2147483648;
>> }
>>
>> Compiling on GCC 3.2.3, I get the following warning:
>>
>> $ gcc foo.c
>> foo.c: In function `main':
>> foo.c:3: warning: decimal constant is so large that it is unsigned
>>
>> To the best of my knowledge, 2147483648 should fit with ample room to
>> spare in an unsigned (4-byte) integer.
>>
>> What's going on?
>>