Re: [PATCH] Trivial warning fix for imap-send.c

On Sun, 12 Mar 2006, A Large Angry SCM wrote:
> 
> 3.2.2.3 Pointers
> 	A pointer to *void* may be converted to or from a pointer to any
> incomplete or object type. A pointer to any incomplete or object type may be
> converted to a pointer to *void* and back again; the result shall compare
> equal to the original pointer.

Large, you're missing the point.

"void *" is guaranteed to be a _superset_ of all pointers.

But that does not mean that any "void *" pointer can be cast to any other 
pointer. BUT IF IT STARTED OUT AS A POINTER OF THAT TYPE, IT'S GUARANTEED 
THAT IT CAN BE CAST _BACK_ TO THAT TYPE.

(And furthermore, NULL is special in that it will always compare equal 
regardless of how it has ever been cast).
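
Concretely, the only things you can lean on are that round-trip and 
NULL. A minimal sketch (the variable names are mine, purely for 
illustration):

	#include <assert.h>
	#include <stddef.h>

	int main(void)
	{
		int x = 42;
		int *ip = &x;

		/* int * -> void * -> int *: guaranteed to compare equal */
		void *vp = ip;
		int *back = vp;
		assert(back == ip);

		/* NULL is the one value that survives any pointer type */
		void *nvp = NULL;
		double *dp = nvp;
		assert(dp == NULL);

		/* NOT guaranteed: vp never started life as a double *,
		   so "double *bad = vp;" is exactly the dodgy case */
		return 0;
	}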

This means, for example, that it's perfectly legal for a C implementation 
to have a 128-bit "void *", where the low bits are the "real pointer" and 
the high 64 bits are the "type descriptor". You could only cast such a 
pointer to that proper type, but you could not cast it to any other type 
(except for the special case of NULL).

My argument boils down to the fact that we don't care one whit about those 
theoretical architectures. It so happens that ia64 function pointers are 
sometimes described this way (due to totally broken reasons - don't ask), 
and that function pointers could indeed be seen as 128-bit quantities. But 
that is such a horribly broken thing, that what compilers on ia64 actually 
do is to instead of having a 128-bit "void *" (which would be legal per 
the standard), they make function pointers actually point to the function 
description (128-bit datum) rather than the actual start of the function.

(I think. I forget the exact details. I think the whole architecture is a 
total mess, and should never have been done in the first place).
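
For what it's worth, the descriptor being talked about is roughly this 
shape (a from-memory sketch of the IA-64 convention, not gospel - the 
field names are mine):

	/* An IA-64 function pointer points at a descriptor like this,
	   not at the first instruction of the function itself. */
	struct ia64_fdesc {
		unsigned long entry;	/* address of the actual code */
		unsigned long gp;	/* global pointer for the callee */
	};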

Similarly, there are certain tagged architectures where the pointer 
actually contains the type it points to, and again, C _allows_ that, and 
if you want to be strictly conforming, you can't do certain things that 
seem obviously correct.

HOWEVER. The undeniable fact is that no sane architecture that anybody 
cares about today (and that, in turn, implies that nobody will care about 
it in the next quarter century - these things have a tendency to 
reinforce themselves) actually does that.

Another example is two's complement. C as a language actually allows 
integer representations other than two's complement, and there's lots 
of verbiage in the standard about how overflow is undefined etc. Then they 
go to pains to explain how "unsigned" integers are guaranteed to behave as 
if the machine was a regular binary machine, even though the language 
lawyers in general went to great pains to make it clear that if the integer 
representation is binary-coded decimal, it's still legal from a C 
standpoint.
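
The practical difference in one screenful (the unsigned half is defined 
to wrap modulo 2^N on every conforming implementation; the signed half 
is simply undefined):

	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int u = UINT_MAX;
		u = u + 1;		/* well-defined: wraps to 0 */
		printf("%u\n", u);	/* prints 0, guaranteed */

		int s = INT_MAX;
		/* s = s + 1; */	/* undefined: anything goes */
		(void)s;
		return 0;
	}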

But again, nobody sane would ever care. The likelihood that we'll see a 
ternary machine in the next few decades is pretty damn small, because 
while the C standard allows for something else, it would be painful in the 
extreme for anybody to actually convert all the programs that effectively 
depend on 8-bit bytes etc.
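
If you'd rather state that dependence once than worry about it, a 
one-line compile-time check does the job (this uses C11's 
_Static_assert; the message string is mine):

	#include <limits.h>

	/* Refuse to build anywhere that isn't a normal byte machine. */
	_Static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");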

So again, in _theory_ the C standard works for some really odd crap out 
there. In practice, there are only certain pretty standard setups (ILP32, 
I32LP64, IL32P64), and some old ones (I16LP32) that nobody cares about, 
and then the really odd ones (36-bit word-addressable monsters where char, 
short, int, long and pointer are all the same size) that have a C 
compiler, but that you will never be able to port _any_ normal program 
to.
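
The names just encode which types are 32-bit and which are 64-bit, so a 
trivial probe tells the setups apart (the sizes in the comments are what 
the common ones report):

	#include <stdio.h>

	int main(void)
	{
		/* ILP32:   int=4 long=4 ptr=4   (32-bit Unix/Linux)
		   I32LP64: int=4 long=8 ptr=8   (64-bit Unix/Linux)
		   IL32P64: int=4 long=4 ptr=8   (64-bit Windows) */
		printf("int=%zu long=%zu ptr=%zu\n",
		       sizeof(int), sizeof(long), sizeof(void *));
		return 0;
	}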

In other words, the C standard allows some really strange stuff. Trying to 
even worry about it is just not worth it. It often makes the code just 
much harder to read for absolutely zero gain.

So in practice, the strangest setup you'll ever really care about is 
actually Windows. And it's strange because it can have a totally broken 
size model (IL32LLP64 - although I think that's usually just a compiler 
switch), and because it has such strange system libraries and filesystem 
behaviour (which is sadly more than just a compiler switch).
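
That size model alone is enough to break the classic Unix assumption 
that a pointer fits in a long; a sketch of the trap (and the portable 
spelling, which comes from <stdint.h>, not from anything 
Windows-specific):

	#include <stdint.h>

	int main(void)
	{
		int x;
		void *p = &x;

		long bad = (long)p;		/* fine on ILP32 and I32LP64,
						   truncates where long is 32
						   bits but pointers are 64 */
		intptr_t ok = (intptr_t)p;	/* sized to hold a pointer on
						   all of these models */
		(void)bad;
		(void)ok;
		return 0;
	}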

Even Windows (or, perhaps, Windows _in_particular_) will never have things 
like a "char" that isn't 8 bits, etc., that could be possible in theory if 
you were to just read the C standard.

		Linus