This seems odd to me too. I realize that floating point is not exact,
but that inexactness should be the same in both forms of your
algorithm.
My suggestion is to dump out the assembly listing of the code and look
at it. The instruction sequences must be different, and that may clue
you in on why the results differ. I'm curious what you find.
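For example, something along these lines should give you a listing to
compare (same log.c as in your compile command; -fverbose-asm just
annotates the assembly and is optional):
# g++ -S -fverbose-asm -o log.s log.c
# less log.s
Comparing the code generated ahead of the two printf calls should show
whether the compiler handles the two quotients differently, e.g. keeps
one in an FPU register while storing the other to memory.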
Perry
On Mar 22, 2006, at 3:05 PM, dups41@xxxxxxxxx wrote:
Hi,
On x86 the following code, intended to calculate log2(4096), gives an
unexpected result of 11 when the value is cast directly to an int. If
the result is first stored in a double and then cast to an int, the
expected value (12) is given.
#include <math.h>
#include <stdio.h>
int main(void)
{
    double r0 = 4096;
    double d_msb;
    int i_msb;
    /* Cast the quotient to int directly. */
    i_msb = (int)(log10(r0)/log10(2));
    printf("%d\n",i_msb);
    /* Store the quotient in a double first, then cast. */
    d_msb = (log10(r0)/log10(2));
    i_msb = (int)d_msb;
    printf("%d\n",i_msb);
    return 0;
}
# uname -srm
Linux 2.4.9-34smp i686
# g++ -o log log.c
# ./log
11
12
I have tried several versions of gcc on x86 (3.0, 3.0.1, 3.0.4, 3.2,
3.2.2, 3.3.1) and all give the same behaviour.
On Solaris 8 this code gives the expected output (12 12). On AMD64,
-m64 gives the expected results, but with -m32 the results differ
(11 12).
Why do the two values differ? If a rounding/precision error is causing
an off-by-one result, should it not be consistent between the two forms
of the code?
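For what it's worth, the direction of the off-by-one would make sense
if the quotient comes out just below 12 before the cast truncates it.
The value below is only a made-up illustration of that, not something I
have confirmed the FPU actually produces:
#include <stdio.h>
int main(void)
{
    double q = 11.999999999999998; /* hypothetical quotient just under 12 */
    printf("%.17g -> %d\n", q, (int)q); /* truncation gives 11, not 12 */
    return 0;
}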
Thanks,
Andrew