Andrew Haley wrote:
We know that any ordered comparison involving NaN is false, right? So,
simple cases can be done first without the (possibly expensive) move
out of the FPU:
if (x < y)
return -1;
if (x > y)
return 1;
then you can do the doubleToLongBits:
long lx = doubleToLongBits(x);
long ly = doubleToLongBits(y);
if (lx == ly)
return 0;
At this point we know they're unequal. Also, either one of them is
NaN, or one of them is -0 and the other +0. Java's doubleToLongBits
always uses the canonical NaN, so we know that they can't both be
NaN.
if (lx < ly)
return -1;
return 1;
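Putting the pieces together, the whole method looks like this (an
untested sketch; "compare" is just an illustrative name):

// Untested sketch of the approach above; "compare" is an
// illustrative name, not an existing method.
import static java.lang.Double.doubleToLongBits;

class DoubleCompare {
    static int compare(double x, double y) {
        // Fast paths: both are false whenever either argument is NaN.
        if (x < y)
            return -1;
        if (x > y)
            return 1;
        long lx = doubleToLongBits(x);  // canonicalizes NaN
        long ly = doubleToLongBits(y);
        if (lx == ly)
            return 0;
        // Only NaN-vs-anything or -0.0-vs-+0.0 can reach this point.
        // The canonical NaN bits (0x7ff8000000000000L) are greater as
        // a signed long than any other value's bits, and -0.0's bits
        // (sign bit set) are negative, so a signed long compare gives
        // the right answer in both cases.
        return (lx < ly) ? -1 : 1;
    }
}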
Hi Andrew,
this is great. I'd forgotten about the NaN test buried in
doubleToLongBits, and I like the logic of pushing the > and < tests to
the top. Using these insights I've tried to figure out whether there's
a better way. Since doubleToLongBits does a NaN test internally, I
wonder whether the following, which uses the raw conversion instead,
is cheaper:
// handle the easy cases:
if (x < y)
return -1;
if (x > y)
return 1;
// handle equality, noting that compare must distinguish 0.0 from -0.0
// (hence not using x == y):
int ix = floatToRawIntBits(x);
int iy = floatToRawIntBits(y);
if (ix == iy)
return 0;
// handle NaNs:
if (x != x)
return (y != y) ? 0 : 1;
else if (y != y)
return -1;
// handle +/- 0.0
return (ix < iy) ? -1 : 1;
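For anyone who wants to try it, here is the same code as a complete,
compilable class with a few sanity checks (compareFloats is just my
name for it):

import static java.lang.Float.floatToRawIntBits;

// Complete version of the sketch above; "compareFloats" is an
// illustrative name.
class FloatCompare {
    static int compareFloats(float x, float y) {
        if (x < y)
            return -1;
        if (x > y)
            return 1;
        int ix = floatToRawIntBits(x);
        int iy = floatToRawIntBits(y);
        if (ix == iy)
            return 0;
        if (x != x)                      // x is NaN
            return (y != y) ? 0 : 1;
        else if (y != y)                 // y is NaN
            return -1;
        return (ix < iy) ? -1 : 1;       // -0.0f vs +0.0f
    }

    public static void main(String[] args) {
        System.out.println(compareFloats(Float.NaN, Float.NaN)); // 0
        System.out.println(compareFloats(1.0f, Float.NaN));      // -1
        System.out.println(compareFloats(-0.0f, 0.0f));          // -1
    }
}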
Although it's almost certain that a VM can do better with a VM-specific
implementation. For example, on Intel you can reuse the fact that a
floating-point compare gives you unordered information (in fact, in
many cases you need to test for this anyway). I imagine GCC is better
at doing this kind of machine-specific optimization than most JITs are.
Regards,
Ian