Manan Chopra <mchopra@xxxxxxxx> writes:

> I am a Physics student. Our research group has been developing a
> molecular simulation package. To make our code run fast we use gcc
> optimizations, but I have hit a case in which gcc shows different
> behaviour when run with and without optimization, and I am not sure
> how to correct the situation.
>
> Below is the part of the code in which I see this behaviour. THIS IS
> BUGGY LINE marks the exact line that gives different results with and
> without optimization. The value of x[i] at that point is 1.5 (exact),
> the value of box_xh is 2.1, and rnx is 0.6, so icelx should evaluate
> to 6, which it does if I don't use optimization; but when I switch
> optimization on it gives icelx = 5, which is not correct. One more
> thing I should mention: real is double in my code.

You neglected to mention what type of machine you are running on. I
would guess that you are using some sort of x86 architecture.

On the x86 the type double is 64 bits, but floating point computation
is by default done in 80-bit registers. The difference in precision
can cause unexpected effects. In particular, neither 0.6 nor 2.1 can
be represented exactly in a binary floating point format.

When compiling without optimization, the values are forced back to 64
bits on the stack at each step. When compiling with optimization this
does not happen. With optimization, the computation carried out in
80-bit precision yields a value just smaller than 6, and when you then
call floor the result is truncated to 5.

The quick workaround is the -ffloat-store option, which will probably
fix your immediate problem. Or, if you have a sufficiently new
processor and compiler, you could try -mfpmath=sse.

Medium term, you should avoid discontinuous functions like floor, and
you should avoid comparing floating point values for equality; instead
check whether they differ by some appropriately small epsilon.

Longer term, I would recommend developing a better understanding of
how computers approximate floating point arithmetic.

Ian
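
P.S. Here is a minimal sketch of the kind of line you describe. The
variable names come from your message; the surrounding program is my
guess, not your actual code.

#include <math.h>
#include <stdio.h>

typedef double real;            /* "real is double in my code" */

int main(void)
{
    real x      = 1.5;          /* x[i] at the failing point; exact */
    real box_xh = 2.1;          /* not exactly representable in binary */
    real rnx    = 0.6;          /* not exactly representable in binary */

    /* Mathematically (1.5 + 2.1) / 0.6 == 6, but whether the computed
     * quotient lands just above or just below 6.0 depends on where the
     * intermediates are rounded (64-bit memory vs. 80-bit x87
     * registers), so floor() may return 6 or 5. */
    int icelx = (int) floor((x + box_xh) / rnx);
    printf("icelx = %d\n", icelx);
    return 0;
}

Try building it with and without -O2, and then again adding
-ffloat-store or -mfpmath=sse, to see how the choice of where the
intermediates are rounded changes the result.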
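
P.P.S. On the epsilon point: rather than handing floor a value that
may sit just below an integer, you can snap near-integer quotients up
before truncating. This is only a sketch; cell_index, nearly_equal,
and the 1e-9 tolerance are my inventions, so pick a tolerance
appropriate to your problem.

#include <math.h>

/* Like floor((x + box_xh) / rnx), but if the quotient falls within
 * eps below the next integer, treat it as that integer instead of
 * letting floor truncate it. */
static int cell_index(double x, double box_xh, double rnx)
{
    const double eps = 1e-9;    /* tolerance; tune to your units */
    double q = (x + box_xh) / rnx;
    double r = floor(q);
    if (q - r > 1.0 - eps)      /* q is just below r + 1 */
        r += 1.0;
    return (int) r;
}

/* The same idea for equality tests: instead of a == b, check that the
 * values agree to within a tolerance scaled to their magnitude. */
static int nearly_equal(double a, double b, double eps)
{
    return fabs(a - b) <= eps * fmax(fabs(a), fabs(b));
}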