Hi,

On Fri, Nov 21, 2008 at 01:28:14AM +0100 or thereabouts, Jean-Luc Herren wrote:
> Hi list!
>
> I'm getting bad performance on 'git status' when I have staged
> many changes to big files. For example, consider this:
> [snip]
> $ time git status
> # On branch master
> # Changes to be committed:
> #   (use "git reset HEAD <file>..." to unstage)
> #
> #       modified:   1
> #       modified:   10
> ...
> #       modified:   98
> #       modified:   99
> #
>
> real    0m16.291s
> user    0m16.054s
> sys     0m0.221s
>
> The first 'git status' shows the same difference as the second,
> just the second time it's staged instead of unstaged. Why does it
> take 16 seconds the second time when it's instant the first time?

I had similar problems with a repository that contained several
tarballs of gcc and the Linux kernel (don't ask me why; it was not
my repository).

Some weeks ago I mentioned this on IRC, and the problem really
wasn't necessarily in git itself. The way it was explained to me
(and please correct or clarify where I am wrong) is that git asks
the kernel for the status of those files, and because they are so
large they had been evicted from the page cache. The result is the
kernel reading those large files back in from disk to see whether
they have changed at all.

My impression is that this is not a git bug but a cache-tuning
problem.

Dave
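
P.S. One rough way to test the page-cache theory (on Linux, and
assuming you have root) is to flush the caches and then time the
command cold and warm. If the first run is slow and the second is
near-instant, the time is going into re-reading the big files from
disk rather than into git itself:

$ sync
$ echo 3 | sudo tee /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
$ time git status                              # cold cache: file data read back from disk
$ time git status                              # warm cache: should be much faster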