Re: remove_duplicates() in builtin/fetch-pack.c is O(N^2)


 



On Monday, May 21, 2012 02:13:33 am Michael Haggerty wrote:
> I just noticed that the remove_duplicates() function in
> builtin/fetch-pack.c is O(N^2) in the number of heads. 
> Empirically, this function takes on the order of 25
> seconds to process 100k references.
> 
> I know that 100k heads is kindof absurd.  Perhaps
> handling this many heads is unrealistic for other
> reasons.  But I vaguely recall numbers like this being
> mentioned on the mailing list.

Yes, I have mentioned 100K several times, and I greatly 
appreciate the many fixes already made to help git handle 
large ref counts better.

However, I would like to suggest that 100K no longer be 
viewed as absurd. :) There are many users for whom it is 
not absurd, certainly not if you include tags.  But I know 
that some of the tag uses have been brushed off as abuses, 
so I will focus on feature branches, of which we actually 
have more than tags in our larger repos; we have around 
60K in our kernel repo.

Of course, we use Gerrit, so features tend to be called 
changes, and each change may get many revisions 
(patchsets), all of which get refs, but I think it might 
be wrong to consider that out of the ordinary anymore.  
After all, should a version control system such as git not 
support 100K revisions of features developed independently 
on separate branches (within Gerrit or not)?  100K is not 
really that many when you consider a large project.  Even 
without Gerrit, if someone wanted to track that many 
features (likely over a few years), they would probably 
use up tons of refs.  

It may be too easy to think that because git is 
distributed, features will get developed in a distributed 
way and therefore no one would ever want to track them all 
in one place, but I suspect that this is a bad assumption.  
That being said, I believe that we are not even tracking 
external features, and we already have over 60K feature 
revisions (Gerrit patchsets) in one repo, so if someone 
were to track all the changes for the kernel, I can only 
imagine that this number would be in the millions.

I am sure that 1M refs is already within the sights of 
some individuals; I know users who actually have 250K.  I 
hope that 10M or even perhaps 100M refs will be considered 
feasible to use with git long term.  

Again, I really hope that this will no longer be seen as 
absurd, but rather just as normal for large projects.  
After all, the kernel was (still is?) the primary use case 
of git.  Our project with the most refs is the kernel, so 
I don't know that we should be considered fringe, and I 
hope that we, along with other large kernel contributors, 
will be considered normal git users. :)

-Martin

-- 
Employee of Qualcomm Innovation Center, Inc. which is a 
member of Code Aurora Forum
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

