Re: [PATCH] git gc: Speed it up by 18% via faster hash comparisons

* Jonathan Nieder <jrnieder@xxxxxxxxx> wrote:

> Ingo Molnar wrote:
> 
> > Most overhead is in hashcmp(), which uses memcmp(), which falls back to 
> > assembly string operations.
> >
> > But we know that hashcmp() compares hashes which, if they do not match,
> > will differ in the first byte in 99% of the cases.
> >
> > So I tried the patch below: instead of relying on GCC putting in the string
> > ops, I used an open-coded loop for this relatively short comparison, which
> > does not go beyond the first byte in 99% of the cases.
> [...]
> > --- a/cache.h
> > +++ b/cache.h
> > @@ -675,14 +675,33 @@ extern char *sha1_pack_name(const unsigned char *sha1);
> [...]
> > +static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
> >  {
> > -	return memcmp(sha1, sha2, 20);
> > +	int i;
> > +
> > +	for (i = 0; i < 20; i++, sha1++, sha2++) {
> 
> Hm.  This would be very sensitive to the compiler, since a too-smart 
> optimizer could take this loop and rewrite it back to memcmp! So I wonder if 
> it's possible to convey this to the compiler more precisely:
> 
> 	return memcmp_probably_differs_early(sha1, sha2, 20);
> 
> E.g., how would something like
> 
> 	const unsigned int *start1 = (const unsigned int *) sha1;
> 	const unsigned int *start2 = (const unsigned int *) sha2;
> 
> 	if (likely(*start1 != *start2)) {
> 		if (*start1 < *start2)
> 			return -1;
> 		return +1;
> 	}
> 	return memcmp(sha1 + 4, sha2 + 4, 16);
>
> perform?

Note that this function won't work on something like 99% of the systems out 
there, due to endianness: Git expects hashcmp() to order hashes the way a 
byte-wise memcmp() does, and comparing native 32-bit words only gives that 
ordering on big-endian boxes.
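
For the record, to keep the byte-wise ordering one would have to convert the 
words to big-endian before comparing. A purely illustrative, untested sketch 
of that idea (hashcmp_be is a made-up name, not the patch under discussion):

  #include <stdint.h>
  #include <string.h>
  #include <arpa/inet.h>        /* ntohl() */

  static inline int hashcmp_be(const unsigned char *sha1,
                               const unsigned char *sha2)
  {
          uint32_t a, b;

          /* memcpy() sidesteps alignment/aliasing issues. */
          memcpy(&a, sha1, 4);
          memcpy(&b, sha2, 4);

          /* ntohl() restores the byte-wise (memcmp-style) ordering. */
          a = ntohl(a);
          b = ntohl(b);
          if (a != b)
                  return a < b ? -1 : +1;

          return memcmp(sha1 + 4, sha2 + 4, 16);
  }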

Also, your hypothetical smart compiler would recognize the above as equivalent 
to memcmp(sha1, sha2, 20) and could rewrite it again - so we'd be back to 
square 1.

As I argued in my other mail in this thread, it's hard to keep a compiler from 
messing up its assembly output if it really wants to - we'll deal with that 
when it happens. I used a very fresh compiler and a modern CPU for my testing, 
so even if very new compilers eventually handle this better, the problem will 
stay with us for a long, long time.
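
(If a compiler ever does start pattern-matching such loops back into memcmp() 
calls, one GCC-specific way to stop it - just a sketch of the idea, with a 
made-up name, not something I'm proposing here - is an empty volatile asm 
inside the loop:)

  static inline int hashcmp_barrier(const unsigned char *sha1,
                                    const unsigned char *sha2)
  {
          int i;

          for (i = 0; i < 20; i++, sha1++, sha2++) {
                  /*
                   * An empty volatile asm has unknown side effects as far
                   * as GCC is concerned, so the loop cannot be rewritten
                   * into a library call.  (GCC-specific, illustrative only.)
                   */
                  __asm__ __volatile__("");

                  if (*sha1 != *sha2)
                          return *sha1 < *sha2 ? -1 : +1;
          }
          return 0;
  }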

Having said that, it would be nice if someone could test these two patches on a 
modern AMD box, using perf stat built from the tree here:

  http://people.redhat.com/mingo/tip.git/README

  cd tools/perf/
  make -j install

and do something like this to test git gc's performance:

  $ perf stat --sync --repeat 10 ./git gc

... to see whether these speedups are generic or somehow specific to Intel CPUs.

> I suspect we don't have to worry about endianness as long as hashcmp
> yields a consistent ordering, but I haven't checked.

Well, I messed up endianness in an early version of this patch and 'git gc' was 
eminently unhappy about it! I have not figured out which part of Git relies on 
the comparison result though - most places seem to use the result as a boolean.
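
(The callers that do depend on the sign are the sorted-hash lookups, e.g. the 
binary search over a pack index. Schematically - this is not the actual Git 
code, and lookup_hash is a made-up name - such a caller looks like this:)

  /*
   * Binary search over an array of 20-byte hashes sorted in memcmp()
   * order: this kind of caller needs hashcmp() to return a consistent
   * ordering, not just zero/non-zero.
   */
  static int lookup_hash(const unsigned char *key,
                         const unsigned char *table, unsigned int nr)
  {
          unsigned int lo = 0, hi = nr;

          while (lo < hi) {
                  unsigned int mi = lo + (hi - lo) / 2;
                  int cmp = hashcmp(key, table + mi * 20);

                  if (!cmp)
                          return mi;      /* found */
                  if (cmp < 0)
                          hi = mi;        /* key sorts before entry mi */
                  else
                          lo = mi + 1;    /* key sorts after entry mi */
          }
          return -1;                      /* not found */
  }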

Thanks,

	Ingo