[breaking the thread since we're way off the original topic, and this
subject might get more interesting attention]

On Wed, Jul 31, 2019 at 10:17:01AM -0700, Junio C Hamano wrote:

> Jeff King <peff@xxxxxxxx> writes:
>
> > And here it is for reference with the matching change in test-oidmap,
> > and the adjustments necessary for the test scripts (from master, not
> > from my earlier patch). I think I prefer the simpler "just sort it all"
> > version I posted with the commit message.
>
> Yeah, let's go with that version.
>
> I am wondering if we should follow suit to certain language's hash
> implementation to make sure the iteration order is unpredictable to
> catch bad scripts ;-) Perhaps that is not worth it, either.

That would be a nice side effect, but the real benefit is that it makes
hash-collision denial-of-service attacks harder. I experimented with
this some when I looked at swapping out the xdiff hash algorithm[1] for
murmur, siphash, or similar, but I could never get the performance quite
on par with what we have now.

I haven't pursued randomization that much because git's diff engine is a
ready-made DoS machine in the first place. If you're diffing untrusted
input, you have to be ready to cut it off after it uses too much CPU and
say "nope, this one is too big". That may be less true for our
general-purpose hashmap, though.

-Peff

[1] I didn't dig up the emails, but this was several years ago, when we
    realized that XDL_FAST_HASH didn't actually work well. I recently
    found out about xxhash:

      https://cyan4973.github.io/xxHash/

    which looks promising, but I haven't gotten around to plugging it in
    and timing the result. Anybody else is welcome to beat me to it. :)

    IIRC, one really tricky thing about our diff code is that it finds
    the newline for each line while it's hashing. That makes it hard to
    plug in an existing hash implementation without going over each line
    twice, and many of them perform poorly if you hand them a byte at a
    time.