> Weird.  Why does gcc think that an "=a" constraint on a 32-bit
> variable results in undefined high bits?

For the same reason that it thinks an "=a" constraint on an 8-bit value
has undefined high bits.  Remember, prior to amd64, partial writes to
x86 registers were very much allowed: you can write %ah and %al, then
read %ax or %eax.  So that assumption was baked into x86 compilers.

AMD changed this for the 64-bit registers precisely because such
partial register updates are a PITA for OOO implementations.  But the
compiler still uses the convention that the unused high-order bits of a
value are not defined.

The issue only arises when converting operand sizes, and if GCC used
the opposite convention it would have to do the same explicit zeroing
on input operands to functions and asm blocks (when a 64-bit value is
cast down to 32 bits).  So it's not obviously a win one way or the
other.
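
A small illustration of the point (my own sketch, not taken from the
original report): the asm body below writes a 32-bit result into %eax
through the "=a" constraint, and the widening cast is where the
operand-size conversion happens.  Because GCC treats the upper half of
%rax as undefined after the asm, it does the zero-extension itself
rather than relying on the hardware's implicit clearing of the high
32 bits.

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t read_low32(void)
    {
            uint32_t lo;

            /* Hypothetical asm body; only the "=a" constraint matters.
             * The output is a 32-bit variable, so GCC considers only
             * the low 32 bits of %rax defined afterwards. */
            asm("movl $0x12345678, %0" : "=a"(lo));

            /* The 32->64 widening is where the compiler must decide
             * whether the high bits are usable.  Under the current
             * convention it zero-extends explicitly. */
            return (uint64_t)lo;
    }

    int main(void)
    {
            printf("%#llx\n", (unsigned long long)read_low32());
            return 0;
    }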