On Sun, Nov 11, 2012 at 05:46:32PM +0100, René Scharfe wrote:

> > However, I couldn't reproduce it on Linux: where the Windows
> > implementation crashes at a ~32000 depth (*not* exactly 32768, mind
> > you), on Linux it happily went through 100000 commits. I didn't take
> > the time to look much further, but maybe on my 64-bit Linux VM the
> > process can afford to reserve a much bigger address range for the
> > stack of each thread than the 1MB given to 32-bit processes on
> > Windows.
> >
> > Jean-Jacques.
>
> I can reproduce it on Linux (Debian testing amd64) with "ulimit -s
> 1000" to reduce the stack size from its default value of 8MB.
>
> After reverting ffc4b8012d9a4f92ef238ff72c0d15e9e1b402ed (tag: speed
> up --contains calculation) the test passes even with the smaller
> stack, but it makes "git tag --contains" take three times as long as
> before.

Right, I am not too surprised. That commit replaced the original
algorithm with a much faster recursive depth-first one. I haven't
looked closely yet at Jean-Jacques' iterative adaptation, but that
direction seems like a good fix for now.

Ultimately, I have some ideas for doing this in a breadth-first way,
which would make it more naturally iterative. It would involve having N
bits of storage per commit to check N tags, but it would mean that we
could get accurate answers in the face of clock skew (like the
merge-base calculation, it would merely get slower in the face of
skew). But since I haven't worked on that at all, fixing the
depth-first algorithm to be iterative makes sense to me.
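
To make the explicit-stack direction concrete, here is a rough,
untested sketch of the idea. The types and names below are simplified
stand-ins, not git's actual internals; it just shows how the recursive
calls can become pushes onto a heap-allocated stack, so traversal depth
is bounded by available memory rather than by the thread's stack size:

  #include <stdlib.h>

  enum contains { UNCHECKED = 0, CONTAINS_YES, CONTAINS_NO };

  struct commit {
          enum contains flags;        /* assumed to start UNCHECKED */
          struct commit **parents;    /* NULL-terminated parent list */
  };

  struct commit_stack {
          size_t nr, alloc;
          struct commit **items;
  };

  static void push(struct commit_stack *st, struct commit *c)
  {
          if (st->nr == st->alloc) {
                  st->alloc = st->alloc ? 2 * st->alloc : 64;
                  st->items = realloc(st->items,
                                      st->alloc * sizeof(*st->items));
                  if (!st->items)
                          abort();
          }
          st->items[st->nr++] = c;
  }

  /* Is "needle" an ancestor of (or equal to) "start"? */
  static int contains(struct commit *start, struct commit *needle)
  {
          struct commit_stack st = { 0, 0, NULL };

          needle->flags = CONTAINS_YES;
          push(&st, start);

          while (st.nr) {
                  struct commit *c = st.items[st.nr - 1];
                  struct commit **p;
                  int deferred = 0;

                  if (c->flags != UNCHECKED) {
                          st.nr--;    /* already decided; pop */
                          continue;
                  }
                  /*
                   * Push undecided parents and revisit c once they
                   * are resolved; this replaces the recursive calls.
                   */
                  for (p = c->parents; p && *p; p++) {
                          if ((*p)->flags == CONTAINS_YES) {
                                  c->flags = CONTAINS_YES;
                                  break;
                          }
                          if ((*p)->flags == UNCHECKED) {
                                  push(&st, *p);
                                  deferred = 1;
                          }
                  }
                  if (c->flags == UNCHECKED && !deferred)
                          c->flags = CONTAINS_NO;
          }
          free(st.items);
          return start->flags == CONTAINS_YES;
  }

The flags double as the yes/no cache that made the recursive version
fast, so something along these lines should keep the speedup from
ffc4b8012 while surviving deep histories on a 1MB stack.

-Peff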