On Tue, 17 Oct 2006, Davide Libenzi wrote:
>
> Ehm, I think there's a little bit of confusion. The incorrect golden ratio
> prime selection for 64-bit machines was coalescing hash indexes into a
> very limited number of buckets, hence creating very bad performance on diff
> operations. The result of the diff would have been exactly the same, just
> coming out after the time for a cup of coffee and a croissant ;)

But my point is, you would have been better off _without_ an algorithm that cared about the word size at all, or with just using "uint32_t". See?

Yes, an "unsigned long" has more bits for hashing on a 64-bit architecture. But that's totally the wrong way of thinking about it. YOU DO NOT WANT MORE BITS! You want the same damn answer regardless of architecture!

A diff algorithm that gives different answers on a 32-bit LE architecture than on a 64-bit BE architecture is BROKEN. If I run on x86-64, I want the same answers I got on x86-32, and the same ones I get on ppc32. Anything else is SIMPLY NOT ACCEPTABLE!

So the whole idea that you should have used 64-bit values was broken, broken, broken. You should never have had anything that cared, because anything that cares is by definition buggy.

This is why we should use the _low_ bits. Never the high bits.

		Linus
-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html