Hi everyone,
I was tracking a bug and came across the code in Float.compare which is
very similar to Double.compare. It just so happened that the bug was
being caused by sorting arrays containing lots of floating point 0s.
This occurs quite frequently in the Jikes RVM when we have an array of
branch profile data. So the current code looks like:
public static int compare(float x, float y)
{
  if (isNaN(x))
    return isNaN(y) ? 0 : 1;
  if (isNaN(y))
    return -1;
  // recall that 0.0 == -0.0, so we convert to infinities and try again
  if (x == 0 && y == 0)
    return (int) (1 / x - 1 / y);
  if (x == y)
    return 0;
  return x > y ? 1 : -1;
}
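For reference, the zero case exists because == cannot tell the two zeros apart, while dividing by them produces infinities of opposite sign. A small standalone sketch (the class name is just for illustration) showing the behaviour the divide trick relies on:

```java
public class ZeroCompareDemo
{
  public static void main(String[] args)
  {
    // 0.0f == -0.0f, so an equality test cannot order the two zeros...
    if (!(0.0f == -0.0f))
      throw new AssertionError("zeros should compare equal with ==");

    // ...but dividing by them yields infinities of opposite sign,
    // which is what the (1 / x - 1 / y) trick exploits
    if (1 / -0.0f != Float.NEGATIVE_INFINITY)
      throw new AssertionError("1 / -0.0f should be -Infinity");
    if (1 / 0.0f != Float.POSITIVE_INFINITY)
      throw new AssertionError("1 / 0.0f should be +Infinity");

    // hence Float.compare orders -0.0f strictly before 0.0f
    if (!(Float.compare(-0.0f, 0.0f) < 0))
      throw new AssertionError("-0.0f should sort before 0.0f");
    if (!(Float.compare(0.0f, -0.0f) > 0))
      throw new AssertionError("0.0f should sort after -0.0f");

    System.out.println("ok");
  }
}
```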
When comparing 0 with 0 we perform two NaN tests, two tests against 0, two
floating-point divides, a floating-point subtract and a float-to-int
conversion. An alternative way of coding the (x == 0 && y == 0) case would be:
if (x == 0 && y == 0)
  return (floatToIntBits(x) >> 31) - (floatToIntBits(y) >> 31);
The arithmetic shift turns each int being subtracted into 0 or -1: 0 when
the zero is positive (sign bit clear) and -1 when it's negative (sign bit
set). This changes the cost of 2 float divides, a float subtract and a
float-to-int conversion into the cost of 2 floatToIntBits, 2 int shifts and
1 int subtract. On x86 the float-to-int conversion is expensive (set and
restore control bits yadda yadda), whereas floatToIntBits is just a float
store and then an int load. I think the latter pattern could be quite a lot
cheaper.
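To check the two formulations agree, here's a quick standalone sketch (class and method names are hypothetical) comparing the sign of the bit-shift variant against the divide-based one for all pairings of the two zeros:

```java
public class ZeroShiftDemo
{
  // proposed variant: the sign bit, shifted arithmetically, becomes
  // 0 for +0.0f and -1 for -0.0f, so the subtraction orders the zeros
  static int compareZeros(float x, float y)
  {
    return (Float.floatToIntBits(x) >> 31) - (Float.floatToIntBits(y) >> 31);
  }

  public static void main(String[] args)
  {
    float[] zeros = { 0.0f, -0.0f };
    for (float x : zeros)
      for (float y : zeros)
      {
        int divide = (int) (1 / x - 1 / y); // original formulation
        int shift = compareZeros(x, y);     // bit-shift formulation
        if (Integer.signum(divide) != Integer.signum(shift))
          throw new AssertionError("disagree on " + x + " vs " + y);
      }
    System.out.println("ok");
  }
}
```

(Only the sign of the result matters to callers of compare, which is why the signums are compared rather than the raw values.)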
In any case, this code seems quite performance-critical, and the VM could
possibly apply some clever optimizations here. I wonder whether it wouldn't
be better to move compare into VMFloat and VMDouble?
Regards,
Ian