Re: [PATCH] blame.c: don't drop origin blobs as eagerly

Jeff King <peff@xxxxxxxx> writes:

> On Wed, Apr 03, 2019 at 07:06:02PM +0700, Duy Nguyen wrote:
>
>> On Wed, Apr 3, 2019 at 6:36 PM Jeff King <peff@xxxxxxxx> wrote:
>> > I suspect we could do even better by storing and reusing not just the
>> > original blob between diffs, but the intermediate diff state (i.e., the
>> > hashes produced by xdl_prepare(), which should be usable between
>> > multiple diffs). That's quite a bit more complex, though, and I imagine
>> > would require some surgery to xdiff.
>> 
>> Amazing. xdl_prepare_ctx and xdl_hash_record (called inside
>> xdl_prepare_ctx) account for 36% according to 'perf report'. Please
>> tell me you didn't just get this on your first guess.
>
> Sorry, it was a guess. ;)
>
> But an educated one, based on previous experiments with speeding up "log
> -p". Remember XDL_FAST_HASH, which produced speedups there (but
> unfortunately also some pathological slowdowns, because it produced too
> many collisions)? I've also played around with using other hashes like
> murmur or siphash, but I was never able to get anything remarkably
> faster than what we have now.
>
>> I tracked and dumped all the hashes that are sent to xdl_prepare() and
>> it looks like the number of duplicates is quite high. There are only
>> about 1000 one-time hashes out of 7000 (I didn't really draw a histogram
>> to examine more closely). So yeah, this looks really promising, assuming
>> somebody is going to do something about it.
>
> I don't think counting the unique hash outputs tells you much about what
> can be sped up. After all, two related blobs are likely to overlap quite
> a bit in their hashes (i.e., all of their non-unique lines). The trick
> is finding, in each blob, the lines that _are_ unique. :)
>
> But if we spend 36% of our time in hashing the blobs, then that implies
> that we could gain back 18% by caching and reusing the work from a
> previous diff (as David notes, a simple keep-the-last-parent cache only
> yields 100% cache hits in a linear history, but it's probably good
> enough for our purposes).
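
For what it's worth, the shape of the cache Peff describes could be
roughly this -- a sketch only, with made-up names; in git the per-line
hashing actually lives in xdl_prepare_ctx() and xdl_hash_record():

  #include <stdlib.h>
  #include <string.h>

  /*
   * Hypothetical: one hash per line, standing in for the state
   * xdl_prepare_ctx() computes via xdl_hash_record().
   */
  struct prepared_blob {
          unsigned char oid[20];   /* blob the hashes were computed from */
          unsigned long *hashes;   /* per-line hashes */
          size_t nr;
  };

  /* stand-in line hash (djb2); the real code uses xdl_hash_record() */
  static unsigned long hash_line(const char *p, const char *end)
  {
          unsigned long h = 5381;
          while (p < end)
                  h = (h << 5) + h + (unsigned char)*p++;
          return h;
  }

  static void prepare_lines(const char *buf, size_t len,
                            struct prepared_blob *pb)
  {
          const char *p = buf, *end = buf + len;
          size_t alloc = 16;

          pb->nr = 0;
          pb->hashes = malloc(alloc * sizeof(*pb->hashes));
          while (p < end) {
                  const char *eol = memchr(p, '\n', end - p);
                  if (!eol)
                          eol = end;
                  if (pb->nr == alloc) {
                          alloc *= 2;
                          pb->hashes = realloc(pb->hashes,
                                               alloc * sizeof(*pb->hashes));
                  }
                  pb->hashes[pb->nr++] = hash_line(p, eol);
                  p = eol + 1;
          }
  }

  /* single-slot, keep-the-last-parent cache */
  static struct prepared_blob last;

  const struct prepared_blob *get_prepared(const unsigned char *oid,
                                           const char *buf, size_t len)
  {
          if (last.hashes && !memcmp(last.oid, oid, sizeof(last.oid)))
                  return &last;            /* hit: skip re-hashing */
          free(last.hashes);               /* miss: prepare and remember */
          memcpy(last.oid, oid, sizeof(last.oid));
          prepare_lines(buf, len, &last);
          return &last;
  }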

Of course, if you really want to get tricky, you won't even compare
stuff that was expanded from the same delta-chain location.  Basically,
there are a number of separate layers doing rather similar work with
rather similar data, but intermingling the layers' work is not going to
be good for maintainability.  Caching at the various layers can keep
their separation while still reducing some of the redundancy costs.
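
To make that concrete, a toy illustration (all names hypothetical;
nothing here is git's actual code): each layer keeps its own cache
behind its own accessor, so the diff layer reuses the object layer's
work without knowing how objects are stored:

  #include <stddef.h>

  /* toy "object layer": expanding an id to blob text stands in for
     delta-chain expansion in a real packfile */
  static const char *expand_blob(int id)
  {
          static const char *store[] = { "a\nb\nc\n", "a\nx\nc\n" };
          return store[id];
  }

  /* object-layer cache: remembers the last expansion */
  static struct { int id; const char *text; } obj_slot = { -1, NULL };

  static const char *blob_cached(int id)
  {
          if (obj_slot.id != id) {
                  obj_slot.text = expand_blob(id);  /* miss: do the work */
                  obj_slot.id = id;
          }
          return obj_slot.text;
  }

  /* diff-layer cache: remembers the last prepared input, keyed the
     same way but kept entirely separate from the object layer's slot */
  static struct { int id; size_t nr_lines; } diff_slot = { -1, 0 };

  static size_t prepared_cached(int id)
  {
          if (diff_slot.id != id) {
                  const char *p = blob_cached(id);  /* via the layer below */
                  size_t n = 0;
                  for (; *p; p++)
                          if (*p == '\n')
                                  n++;     /* stands in for per-line hashing */
                  diff_slot.nr_lines = n;
                  diff_slot.id = id;
          }
          return diff_slot.nr_lines;
  }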

-- 
David Kastrup


