On Wed, Nov 27, 2019 at 10:52:02PM +0000, Ramsay Jones wrote:
>
> I decided to just test the 'luc/next' branch (commit 4a8aa8d1 cgcc: add
> support for riscv64). :-P

Hehe, quite wise :)

> I have only tested on 64-bit Linux and Cygwin (the sparse testsuite and
> running it over git), so far with no issues.
>
> I have also compared the output of:
>     $ ./cgcc -dM -E - </dev/null | sort >sss
>     $ gcc -dM -E - </dev/null | sort >ggg
>     $ meld ggg sss
> on both Linux and cygwin (with similar results).
>
> I have ignored the 'float stuff', since the output of gcc and sparse
> is almost totally different! :(

Yes, the 'TYPE' and 'SIZEOF' predefines should be correct, but I ignore
the rest (which is generated by cgcc, not by sparse itself).

> The main difference, which is new, is the spelling of the 'type names'.
> e.g. __CHAR16_TYPE__ is given as 'short unsigned int' by gcc but
> 'unsigned short' by sparse. The following table shows the 'type name'
> differences:
>
>     CHAR16_TYPE     short unsigned int   =>  unsigned short
>     INT16_TYPE      short int            =>  short
>     INT64_TYPE      long int             =>  long
>     INTMAX_TYPE     long int             =>  long
>     INTPTR_TYPE     long int             =>  long
>     PTRDIFF_TYPE    long int             =>  long
>     SIZE_TYPE       long unsigned int    =>  unsigned long
>     UINT16_TYPE     short unsigned int   =>  unsigned short
>     UINT64_TYPE     long unsigned int    =>  unsigned long
>     UINTMAX_TYPE    long unsigned int    =>  unsigned long
>     UINTPTR_TYPE    long unsigned int    =>  unsigned long

I was a bit surprised by the 'new' aspect, as sparse itself has output
these names since last December (IIRC), but yes, they were overwritten
by cgcc until the patch that removed cgcc's integer_types():
    fba1931d2 ("cgcc: removed unneeded predefines for integers")

But yes, they're different; sparse just uses show_typename() for them.
I had already wondered why GCC spells them like this. Well, I see that
GCC's spelling inhibits something like:
    INTMAX_TYPE double var;
so maybe sparse should do the same for its predefines. (A short snippet
at the end of this mail spells this out.)

> sparse seems to '#define linux linux' rather than '#define linux 1'.

Funny, I wonder why; I had never noticed. I think it should not be
defined at all (nor used, for that matter).

> sparse defines __LITTLE_ENDIAN__ but gcc does not.

Yes, indeed. Well, GCC does define it for some archs/OSes:
* on the *BSDs
* on ppc64le
* probably on all platforms that are big-endian by default when
  -mlittle-endian is used.

> On cygwin, the results are similar to the above, with the addition
> of the following:
>
>     WCHAR_TYPE      short unsigned int   =>  unsigned short
>
> Also, sparse defines __CYGWIN32__ when it shouldn't (this is on
> x86_64 cygwin, without -m32 etc).

Yes, I noticed that a few days ago. I tried to fix it, but this part of
cgcc is bitness-agnostic. Doing a #undef in sparse itself would be
easier.

Thank you very much!
-- Luc
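
A minimal illustration of the spelling point above, assuming the x86-64
Linux predefines shown in the table (the file and variable names are
arbitrary, chosen only for this sketch):

    /* spelling.c - try 'gcc -c spelling.c' and 'sparse spelling.c'.
     *
     * GCC predefines __INTMAX_TYPE__ as 'long int', so the declaration
     * below expands to "long int double var;" and is rejected as
     * invalid.  sparse currently predefines it as plain 'long', so the
     * same line expands to "long double var;" and is silently accepted
     * as a (bogus) long double.
     */
    __INTMAX_TYPE__ double var;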