tom peng writes:

> Thanks for correcting my typo on "signed long int 4 8".
>
> The problem I have here is about simulating a hardware model in C/C++.
> The hardware model was originally developed for a 32-bit CPU. We had
> presumed that "signed long int" was 32 bits / 4 bytes, as specified in
> the C/C++ code "typedef signed long int int32".
>
> It is apparent that this "int32" in a 64-bit ELF executable is 8 bytes
> long, which is not what we expect. It "might" cause hardware-simulation
> inconsistencies when running in 64-bit vs. 32-bit ELF format.
>
> I hope to get a general picture of the size differences between the
> C/C++ data types and their modifiers in 32-bit and 64-bit ELF format as
> compiled by GCC.
>
> Are those 32-bit / 64-bit differences stipulated in the C/C++ standard
> or ruled by the GNU/GCC compiler?

Neither of those. They are defined by the ABI, at

  http://refspecs.freestandards.org/elf/x86_64-abi-0.95.pdf

page 12. GCC, as the system compiler, must conform to this specification.
It isn't something that we compiler writers get to decide for ourselves.

Andrew.
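
P.S. A minimal sketch, not from your code: assuming the intent of the
typedef is a type that is exactly 32 bits wide under both ABIs, the
fixed-width types from <stdint.h> (C99) / <cstdint> (C++11) give you
that directly. The typedef name "int32" below comes from your mail; the
rest is illustrative:

    /* Sketch: replace the hand-rolled typedef with a fixed-width type.
       int32_t is exactly 4 bytes under both the i386 and x86-64 ABIs. */
    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t int32;   /* was: typedef signed long int int32 */

    /* Compile-time check (C11 _Static_assert); with an older compiler
       the classic negative-array-size trick does the same job. */
    _Static_assert(sizeof(int32) == 4, "int32 must be 4 bytes");

    int main(void)
    {
        /* long is 4 bytes under ILP32 but 8 bytes under LP64, which
           is exactly the inconsistency you are seeing. */
        printf("sizeof(long)  = %zu\n", sizeof(long));
        printf("sizeof(int32) = %zu\n", sizeof(int32));
        return 0;
    }

Built on x86 GNU/Linux with gcc -m32 and gcc -m64, sizeof(int32) stays
4 in both, while sizeof(long) changes from 4 to 8.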