Andreas Ericsson <ae@xxxxxx> wrote:
> Output latency will grow with the number of tags though, won't it? I was
> thinking of the repo which users had reported problems fetching tags
> from, as there were more than 2k tags. If my memory serves me correctly,
> this report led to the packed-refs stuff.

Only on tags which are considered possible matches.

What we do now is walk back along the commit history until we find a
commit which has a tag pointing at it. As soon as we find that commit
we enqueue its corresponding tag into a list and completely ignore its
parents, so we never walk any further back along that line (see the
first sketch below). We really only pay the bare minimum price here.

Unfortunately the tag matching is O(n*m*q), where:

  n == number of tags,
  m == number of commits we have to walk back before we find a tag,
  q == number of possible tags that will match for this lineage.

Storing the tags in a hash structure would accelerate this matching,
especially for high values of n (see the second sketch below). About m
and q we cannot do anything, except to recommend to projects that they
tag more often than never.

> Unless there's some bogosity going on that leads to it finding the 14
> tags in the above case.

Yeah, it's a bogosity in the way the branches are merged together. I
haven't dug into what is causing this specifically in git.git, but I
know the code is doing the best it can to select the smallest set of
possible matches.

There is, however, a better algorithm for the revision list generation;
I think I can fold it into the main walk-back loop and still get the
right result. I have to read a stack of papers tonight, but I'll see if
I can code up the algorithm. It would accelerate git-describe right
back to where it was before my change.

-- 
Shawn.
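
A rough sketch of the walk-back described in the message, for
illustration only: this is not git's actual code, and is_tagged() and
enqueue_match() are hypothetical stand-ins for the real internals.

#include <stddef.h>

struct commit {
	struct commit **parents;
	int nr_parents;
	unsigned flags;		/* SEEN bit so merges aren't revisited */
};
#define SEEN 1u

/* assumed helpers, not real git API */
int is_tagged(struct commit *c);
void enqueue_match(struct commit *c);

void walk_back(struct commit *head)
{
	struct commit *queue[4096];	/* fixed-size queue; fine for a sketch */
	int in = 0, out = 0;
	int i;

	queue[in++] = head;
	head->flags |= SEEN;

	while (out < in) {
		struct commit *c = queue[out++];

		if (is_tagged(c)) {
			/* Tag found: remember it and ignore c's parents,
			 * so nothing further back is ever visited. */
			enqueue_match(c);
			continue;
		}
		for (i = 0; i < c->nr_parents; i++) {
			struct commit *p = c->parents[i];
			if (!(p->flags & SEEN) && in < 4096) {
				p->flags |= SEEN;
				queue[in++] = p;
			}
		}
	}
}

Because parents of a tagged commit are never enqueued, the walk only
ever visits the m commits between HEAD and the nearest tags.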
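
And a minimal sketch of the hash-structure idea, again with made-up
names (add_tag(), lookup_tags()): tags are keyed by the commit they
point at, so each commit visited during the walk-back costs one
expected O(1) lookup instead of a scan over all n tags.

#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE 1024		/* power of two, so masking works */

struct tag_entry {
	unsigned char sha1[20];		/* commit the tag points at */
	const char *name;
	struct tag_entry *next;		/* collision chain */
};

static struct tag_entry *table[TABLE_SIZE];

static unsigned int hash_sha1(const unsigned char *sha1)
{
	unsigned int h;
	memcpy(&h, sha1, sizeof(h));	/* SHA-1 bytes are well mixed already */
	return h & (TABLE_SIZE - 1);
}

/* one call per tag at startup: O(n) total (no error handling; a sketch) */
void add_tag(const unsigned char *sha1, const char *name)
{
	struct tag_entry *e = malloc(sizeof(*e));
	unsigned int h = hash_sha1(sha1);

	memcpy(e->sha1, sha1, 20);
	e->name = name;
	e->next = table[h];
	table[h] = e;
}

/* one call per commit during the walk-back: expected O(1).  Returns the
 * first matching entry; if several tags point at the same commit, the
 * caller keeps following ->next with the same sha1 test to find them. */
struct tag_entry *lookup_tags(const unsigned char *sha1)
{
	struct tag_entry *e = table[hash_sha1(sha1)];

	while (e && memcmp(e->sha1, sha1, 20))
		e = e->next;
	return e;
}

Setup is a one-time O(n) pass over the refs, which removes the factor
of n from the per-commit work during the walk.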