Re: Strange enum type conversion

Dear Richard and John,

With your help I solved my initial problem, and I think I have a rough
understanding of what happens. However, I don't really know why it
happens...
You write:

"An int is signed only if it's smaller than INT_MAX in magnitude.
Otherwise it's unsigned (hence #defining INT_MIN as (-INT_MAX - 1))"

Is this defined somewhere in the C++ standard (where?), or is it
gcc-specific? I was told that, e.g., in MS Visual C++ 0xffffffff is
interpreted as -1.
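
A small test like the following should show what a particular compiler
actually does with the literal. It is only a sketch and assumes a compiler
that already provides decltype and the C++0x/C++11 <type_traits> header:

#include <iostream>
#include <type_traits>

int main()
{
    // decltype(0xFFFFFFFF) is whatever type the compiler gave the literal
    // (int, unsigned int, long, unsigned long, ...) depending on what it
    // fits into on this platform.
    std::cout << "literal is signed: "
              << std::is_signed<decltype(0xFFFFFFFF)>::value << '\n';
    std::cout << "literal size:      " << sizeof(0xFFFFFFFF) << " bytes\n";
    return 0;
}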

And what exactly happens when the values in an enum are too large to
fit into a signed int? Does the compiler try an unsigned int, then a
long int, then an unsigned long, and so on? Is this defined in the C++
standard, or is it gcc-specific?
And is there a way to find out whether the enumerators have a signed or
an unsigned type?
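
For that last question, a sketch along these lines might reveal what the
compiler picked. Again this assumes C++0x/C++11 <type_traits>, and the enum
Big with its enumerator kBig is just a made-up example:

#include <iostream>
#include <type_traits>

// An enumerator value that does not fit in a 32-bit signed int, so the
// compiler has to pick a wider or unsigned underlying type for it.
enum Big { kBig = 0xFFFFFFFF };

int main()
{
    typedef std::underlying_type<Big>::type U;
    std::cout << "underlying type is signed: "
              << std::is_signed<U>::value << '\n';
    std::cout << "underlying type size:      " << sizeof(U) << " bytes\n";
    return 0;
}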

Thank you very much again,

Hajo

 

On Thursday, 03.12.2009, at 15:25 +0000, Richard Earnshaw wrote:
> On Thu, 2009-12-03 at 14:40 +0000, Richard Earnshaw wrote:
> 
> > > 0xFFFFFFFF (which is an unsigned int)
> > 
> > wrong.  That's an int with a bit pattern that's all 1's.  On a two's
> > complement machine with 32-bit ints that's equivalent to -1.
> 
> Forget that.  I'm talking nonsense.  An int is signed only if it's
> smaller than INT_MAX in magnitude.  Otherwise it's unsigned (hence
> #defining INT_MIN as (-INT_MAX - 1))
> 
> R.
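
P.S. I think I see why INT_MIN is written as (-INT_MAX - 1): the literal
2147483648 does not fit in a 32-bit int, so -2147483648 would be unary
minus applied to a value that already has some wider or unsigned type.
A small illustration, in case I have this right (assuming 32-bit ints):

#include <climits>
#include <iostream>

int main()
{
    // (-INT_MAX - 1) stays within int the whole time, whereas the literal
    // 2147483648 would already have a different type before the minus
    // sign is applied.
    std::cout << "INT_MAX      = " << INT_MAX << '\n';
    std::cout << "-INT_MAX - 1 = " << (-INT_MAX - 1) << '\n';
    std::cout << "INT_MIN      = " << INT_MIN << '\n';
    return 0;
}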


