Re: git-index-pack really does suck..

On Tue, 3 Apr 2007, Nicolas Pitre wrote:
> 
> Are hard numbers like 7% overhead (because right now that's all we have) 
> really worth it against bad _perceptions_?

If it actually stays at just 7% even with large repos (and the numbers 
from Chris seem to say that it doesn't get worse - in fact, it may be that 
the lookup gets relatively more efficient for a large repo thanks to the 
log(n) costs), I agree that 7% probably isn't worth worrying about when 
weighed against "guaranteed no SHA1 collision". Especially as long as 
you'd normally only hit it when your real performance issue is going to be 
the network.
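
To make the log(n) point concrete: the existence check is just a
binary search over a sorted table of fixed-size SHA1 entries, so
doubling the number of objects costs only one more comparison. A
minimal standalone sketch - not git's actual index code - looks
something like this:

	#include <stdio.h>
	#include <string.h>
	#include <stddef.h>

	#define SHA1_LEN 20

	/*
	 * "idx" is assumed to be n entries of SHA1_LEN bytes each,
	 * sorted; returns 1 if sha1 is already present, 0 if not.
	 */
	static int sha1_exists(const unsigned char *idx, size_t n,
			       const unsigned char *sha1)
	{
		size_t lo = 0, hi = n;

		while (lo < hi) {
			size_t mid = lo + (hi - lo) / 2;
			int cmp = memcmp(idx + mid * SHA1_LEN, sha1, SHA1_LEN);

			if (!cmp)
				return 1;	/* hit: the paranoia kicks in here */
			if (cmp < 0)
				lo = mid + 1;
			else
				hi = mid;
		}
		return 0;	/* definitely a new object */
	}

	int main(void)
	{
		/* three fake zero-padded "SHA1s", pre-sorted */
		unsigned char idx[3][SHA1_LEN] = { { 0x01 }, { 0x42 }, { 0xfe } };
		unsigned char probe[SHA1_LEN] = { 0x42 };

		printf("found: %d\n", sha1_exists(idx[0], 3, probe));
		return 0;
	}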

So especially if we can make sure that the *local* case is ok, where the 
network isn't going to be the bottleneck, I think we can/should do the 
paranoia.

That's especially true as it's also the local case where the 7% has 
already been shown to be just the best case: the worst case is many 
hundreds of percent (with memory use going up from 55M to 280M in one 
example), thanks to us actually *finding* the objects.
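
And to be explicit about where that worst case comes from: the only
way to *guarantee* no collision on a hit is to read the existing
object back and compare it byte-for-byte, which means inflating a
whole second copy of the object. A hedged sketch of just that check,
with read_existing_object() as a hypothetical stand-in for the actual
object read-back, would be:

	#include <stdlib.h>
	#include <string.h>

	/* hypothetical stand-in: a real version would inflate the object */
	static void *read_existing_object(const unsigned char *sha1,
					  size_t *size)
	{
		(void)sha1;
		*size = 0;
		return NULL;
	}

	/* called only after the SHA1 lookup already reported a hit */
	static int same_object(const unsigned char *sha1,
			       const void *buf, size_t len)
	{
		size_t old_len;
		void *old = read_existing_object(sha1, &old_len); /* the memory cost */
		int same = old && old_len == len && !memcmp(old, buf, len);

		free(old);
		return same;	/* 0 here would mean a real SHA1 collision */
	}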

			Linus