Re: [PATCH] git gc: Speed it up by 18% via faster hash comparisons

On Thu, Apr 28, 2011 at 12:30 PM, Pekka Enberg <penberg@xxxxxxxxxxxxxx> wrote:
> Hi,
>
> On 4/28/11 1:19 PM, Erik Faye-Lund wrote:
>>
>> On Thu, Apr 28, 2011 at 12:10 PM, Pekka Enberg<penberg@xxxxxxxxxxxxxx>
>>  wrote:
>>>
>>> On 4/28/11 12:50 PM, Erik Faye-Lund wrote:
>>>>>
>>>>> Alas, I have not seen these sha1 hash buffers being allocated unaligned
>>>>> (in my
>>>>> very limited testing). In which spots are they allocated unaligned?
>>>>
>>>> Like I said above, it can happen when allocated on the stack. But it
>>>> can also happen in malloc'ed structs, or in global variables. An array
>>>> is aligned to the size of its base member type. But malloc does
>>>> worst-case alignment, because it happens at run time without
>>>> type information.
>>>
>>> I'd be very surprised if malloc() did "worst case alignment" - that'd
>>> suck pretty badly from a performance point of view.
>>
>>  From POSIX (I don't have K&R at hand, but it's also specified there):
>> "The pointer returned if the allocation succeeds shall be suitably
>> aligned so that it may be assigned to a pointer to any type of object
>> and then used to access such an object in the space allocated (until
>> the space is explicitly freed or reallocated)."
>>
>> I put it in quotes because it's not the worst-case alignment you can
>> ever think of, but rather the worst case among your CPU's alignment
>> requirements. This is 4 bytes for most CPUs.
>
> That's just the minimum guarantee! Why do you think modern malloc()
> implementations don't try *very* hard to provide best possible alignment?
>

Yes, it's the minimum alignment requirement. And yes, malloc
implementations try to keep the alignment. I don't think there's any
contradiction between what you and I said.

>>> Stack allocation alignment is a harder issue but I doubt it's as bad
>>> as you make it out to be. On x86, for example, the stack pointer is
>>> almost always 8- or 16-byte aligned with compilers whose writers have
>>> spent any time reading the Intel optimization manuals.
>>>
>>> So yes, your statements are absolutely correct, but I strongly doubt
>>> it matters that much in practice unless you're using a really crappy
>>> compiler...
>>
>> I'm sorry, but the fact of the matter is that we don't write code
>> for one compiler, we try to please many. Crappy compilers are very
>> much out there in the wild, and we have to deal with it. So, we can't
>> depend on char-arrays being aligned to 32 bits. This code WILL break
>> on GCC for ARM, so it's not a theoretical issue at all. It will also
>> most likely break on GCC for x86 when optimizations are disabled.
>
> Yes, ARM is a problem and I didn't try to claim otherwise. However, it's
> not "impossible to fix" as you say; memalign() can take care of it.

True, it's not impossible. It's just an insane thing to try to do, for
a very small gain. The important change was the early-out, and we can
get that while still using a platform-optimized memcmp.

> But my comment was mostly about your claim that "we have no guarantee that
> the SHA-1s are aligned on x86 either, and unaligned accesses are slow on
> x86" which only matters in practice if you have a crappy compiler. And
> arguing for performance if you don't have a reasonable compiler is pretty
> uninteresting.

I agree that not aligning arrays when optimizations are disabled isn't
a big problem on x86. But I don't think it makes sense to assume that
every reasonable compiler/compiler-setting pair for x86 aligns all
char-arrays. Aligning short arrays on the stack can lead to
sub-optimal caching for local variables, for instance. Alignment isn't
the only thing that matters.

But that point aside, we need an implementation that is both fast and
correct on all platforms; type-punning arrays is not the way to do it.

So my preference is still something like this. Call me conservative ;)

diff --git a/cache.h b/cache.h
index c730c58..8bc03c6 100644
--- a/cache.h
+++ b/cache.h
@@ -681,13 +681,17 @@ extern char *sha1_pack_name(const unsigned char *sha1);
 extern char *sha1_pack_index_name(const unsigned char *sha1);
 extern const char *find_unique_abbrev(const unsigned char *sha1, int);
 extern const unsigned char null_sha1[20];
-static inline int is_null_sha1(const unsigned char *sha1)
+static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
 {
-	return !memcmp(sha1, null_sha1, 20);
+	/* early out for fast mis-match */
+	if (*sha1 != *sha2)
+		return *sha1 - *sha2;
+
+	return memcmp(sha1 + 1, sha2 + 1, 19);
 }
-static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
+static inline int is_null_sha1(const unsigned char *sha1)
 {
-	return memcmp(sha1, sha2, 20);
+	return !hashcmp(sha1, null_sha1);
 }
 static inline void hashcpy(unsigned char *sha_dst, const unsigned char *sha_src)
 {
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

