I've been doing some work with floating-point numbers and stumbled across some peculiar results, and I'm not sure where to go from here. The code below generates this output on Red Hat 2.1 ix86 using g++ 2.96, 3.2, and 3.4.1:

original -79.937384
int cast: -79937383
lrint: -79937384
trunc: -79937384

g++ 3.2 on Mac OS X, g++ 2.96 on Solaris 2.7, g++ 3.2 on Solaris 2.8, and Visual Studio .NET on Windows XP all return -79937384 for the int cast. I'm stumped as to what is causing this discrepancy in the int cast, especially when lrint and trunc behave properly. Any insight anyone has would be much appreciated.

#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
    int iTmp = 0;
    double dTmp = -79.937384;

    printf("original %.6f\n", dTmp);

    /* truncating cast of the product */
    iTmp = (int)(dTmp * 1000000.0);
    printf("int cast: %d\n", iTmp);

    /* round to nearest integer */
    iTmp = llrint(dTmp * 1000000.0);
    printf("lrint: %d\n", iTmp);

    /* truncate toward zero */
    iTmp = trunc(dTmp * 1000000.0);
    printf("trunc: %d\n\n", iTmp);

    return 0;
} // end main

Joe
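P.S. In case it helps narrow this down: the usual suspect on ix86 is the x87 FPU, which computes the product in 80-bit extended precision, so the value the cast truncates can be a hair smaller in magnitude than -79937384 even though the product, once rounded to a stored double, is exactly -79937384.0. Below is a small diagnostic sketch along those lines (dProd and lProd are illustrative names of my own, and the long double printout assumes the ix86 80-bit extended format):

#include <stdio.h>

int main(void)
{
    double dTmp = -79.937384;

    /* Assigning the product to a double forces it to be rounded
       to double precision before anything else happens. */
    double dProd = dTmp * 1000000.0;

    /* On ix86 the x87 FPU works in 80-bit extended precision; doing the
       multiply in long double shows the intermediate before rounding. */
    long double lProd = (long double)dTmp * 1000000.0L;

    printf("stored double product:  %.17g\n", dProd);
    printf("extended product:       %.21Lg\n", lProd);
    printf("cast of stored double:  %d\n", (int)dProd);
    printf("cast of raw expression: %d\n", (int)(dTmp * 1000000.0));
    return 0;
}

If the last two lines differ on the Red Hat box but agree everywhere else, excess precision in the x87 intermediate is the likely culprit; compiling with -ffloat-store, or on hardware that does double math in double precision, may then make the difference disappear.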