Hello,

Converting a signed integer to an unsigned integer is easy in C: the implicit conversion is well defined (the value is reduced modulo 2^N). However, converting an unsigned integer to a signed integer is implementation defined (or can raise an implementation-defined signal) if "the value cannot be represented in it". So I wrote some simple code to convert a uint_fast32_t to an int_fast32_t:

    if (positive > (int_fast64_t) INT32_MAX) {
        return -((UINT32_MAX - (int_fast64_t) positive) + 1u);
    }
    return positive;

Unfortunately, GCC does not optimize away the comparison and calculations on x86_64. This is not a bug report: I have not checked whether the latest version of GCC catches this, and Clang does not catch it either, so perhaps this optimization would be difficult to implement.

What I want to know is: how can I make this code friendly to GCC's optimizer? If I need the best performance, I may have to rely on implementation-defined behaviour or use a builtin intrinsic, but before I look into those options I thought I'd check with the mailing list.