Is there a special reason why I can't directly access the exponent value inside an x86 long double?

I've written some library templates that map bitfields (composed from templated bit sizes) onto union fields holding any IEEE 754 float. Since x86 `long double` is a special case, I specialized the template for it. When I initialize the union's `long double` member (default compilation flags, so the docs say 80 bits) with the M_PIl constant, the mantissa field reads out fine. However, the "sign" bit reads as true for the positive value of pi, and the biased "exponent" value looks essentially random. It should be 1 (the largest power of 2 in 3) + 16383 (the 15-bit bias) = 16384 = 0x4000.

This came to my attention while running a unit test parameterized by type: `float` and `double` both pass, but `long double` starts failing once the exponent's correctness is verified.

Thanks for any pointers that help me understand this issue.

Cristiano.