GCC does optimizations based on knowing that signed integer overflow
is undefined behavior. It may not catch this conversion right now,
but given time, it will.
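For illustration, a minimal sketch of the kind of folding meant here
(a hypothetical example, assuming default optimization at -O2):

    /* Signed overflow is undefined, so GCC is allowed to assume the
       addition never wraps and fold the comparison to a constant.  */
    int always_true(int x)
    {
        return x + 1 > x;   /* compiled as "return 1;" at -O2 */
    }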
This surprises me. My understanding was that the result of
a conversion from an unsigned integer type to a signed
integer type, when the unsigned value doesn't fit into the
range of the signed type, is merely implementation-defined
rather than undefined behaviour.
Your understanding is correct.
Section 4.5 of GCC's manual
seems to say that GCC chooses to wrap modulo 2**(width
of the signed type) in this case. Is this likely to change
in future GCC versions?
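For concreteness, a small example of the conversion in question
(assuming a target where int is 32 bits wide):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int u = UINT_MAX;   /* 4294967295 with a 32-bit int */
        int s = (int)u;              /* implementation-defined conversion */
        printf("%d\n", s);           /* with the documented wrapping: -1 */
        return 0;
    }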
It is documented behaviour, so it is unlikely to change any time
soon; it isn't likely to change until we all stop using two's
complement, anyway ;-)
Segher