On Thu, 2010-11-11 at 21:57 -0800, 0throot wrote:
> In one of my programs, I found a weird problem with float
> multiplication. Any insights you can provide will be very helpful.
>
> The program is as follows,
>
> #include <stdio.h>
> int main(void)
> {
>     float a = 4097, b = 4097, c = 0;
>     c = a * b;
>     printf("%12.2f != %d\n", c, 4097 * 4097);
>     return 0;
> }
>
> The output I get is,
>
> 16785408.00 != 16785409
>
> I never thought using float over int could have such adverse effects.
>
> Is this the correct behavior? Or am I doing something wrong here?
>
> Following are the system details,
>
> gcc version 4.4.2 20091027 (Red Hat 4.4.2-7) (GCC)
> Fedora release 12 (Constantine)
> Linux lap.local 2.6.31.5-127.fc12.i686.PAE #1 SMP Sat Nov 7 21:25:57
> EST 2009 i686 i686 i386 GNU/Linux
>
> 0/

GNU/Linux on x86 hardware uses a 32-bit float, which has only 24 bits of
resolution. 2^24 = 16,777,216, and 4,097 * 4,097 = 16,785,409 >
16,777,216, so the product cannot be represented exactly. You're seeing
round-off error. Try using doubles instead of floats (but you may still
get round-off error).

Be aware, though, that floating-point numbers use part of their bits for
the exponent, so for a given number of bits they have less resolution
than integers. The trade-off is range versus resolution.

A common mistake among beginning programmers is to think of floats as
reals. They are not. They are discrete values, which are not evenly
distributed throughout their range.

--Bob