On Wed, Apr 03, 2019 at 07:06:02PM +0700, Duy Nguyen wrote:

> On Wed, Apr 3, 2019 at 6:36 PM Jeff King <peff@xxxxxxxx> wrote:
> > I suspect we could do even better by storing and reusing not just the
> > original blob between diffs, but the intermediate diff state (i.e., the
> > hashes produced by xdl_prepare(), which should be usable between
> > multiple diffs). That's quite a bit more complex, though, and I imagine
> > would require some surgery to xdiff.
>
> Amazing. xdl_prepare_ctx and xdl_hash_record (called inside
> xdl_prepare_ctx) account for 36% according to 'perf report'. Please
> tell me you just did not get this on your first guess.

Sorry, it was a guess. ;)

But an educated one, based on previous experiments with speeding up
"log -p". Remember XDL_FAST_HASH, which produced speedups there (but
unfortunately had some pathological slowdowns, because it produced too
many collisions). I've also played around with using other hashes like
murmur or siphash, but was never able to get anything remarkably faster
than what we have now.

> I tracked and dumped all the hashes that are sent to xdl_prepare() and
> it looks like the amount of duplicates is quite high. There are only
> about 1000 one-time hashes out of 7000 (didn't really draw a histogram
> to examine closer). So yeah this looks really promising, assuming
> somebody is going to do something about it.

I don't think counting the unique hash outputs tells you much about
what can be sped up. After all, two related blobs are likely to overlap
quite a bit in their hashes (i.e., all of their non-unique lines). The
trick is finding, within each blob, the lines that _are_ unique. :)

But if we spend 36% of our time hashing the blobs, that implies we
could gain back about 18% by caching and reusing the work from the
previous diff (as David notes, a simple keep-the-last-parent cache only
yields 100% cache hits in a linear history, but it's probably good
enough for our purposes).

This should likewise make "git log -p -- file" faster, though with more
files you'd need a bigger cache.

So I do think it's a promising lead. I don't have immediate plans to
work on it, though. Maybe it would be a good GSoC project. ;)

-Peff
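
[Editor's sketch, not part of the original message: below is a minimal
illustration of the keep-the-last-parent caching idea discussed above.
The names (struct prepared_blob, prepare_cached, hash_lines) are
hypothetical stand-ins and do not correspond to git's actual xdiff
interfaces.]

    #include <stdlib.h>
    #include <string.h>

    /*
     * Single-entry cache holding the per-line hash work for the most
     * recently prepared blob. In a linear "git log -p" walk, the parent
     * side of each diff is the same blob that was the child side of the
     * previous diff, so this one slot already covers most lookups.
     */
    struct prepared_blob {
            unsigned char oid[20];    /* identity of the blob we prepared */
            unsigned long *line_hash; /* one hash per line, as a prepare step would compute */
            size_t nr_lines;
    };

    static struct prepared_blob last_parent;

    /* hash_lines() stands in for the real per-line hashing pass */
    extern unsigned long *hash_lines(const char *buf, size_t len, size_t *nr_lines);

    const struct prepared_blob *prepare_cached(const unsigned char *oid,
                                               const char *buf, size_t len)
    {
            if (last_parent.line_hash && !memcmp(last_parent.oid, oid, 20))
                    return &last_parent; /* cache hit: reuse earlier hashing work */

            /* cache miss: throw away the old entry and hash this blob */
            free(last_parent.line_hash);
            memcpy(last_parent.oid, oid, 20);
            last_parent.line_hash = hash_lines(buf, len, &last_parent.nr_lines);
            return &last_parent;
    }

Because each diff hashes two blobs and (in a linear history) one of
them was already hashed as the other side of the previous diff, reusing
that work saves roughly half the hashing cost, which is where the ~18%
figure above comes from.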