On 10.03.2014 12:42, Dennis Luehring wrote:
> On 10.03.2014 12:28, demerphq wrote:
>> I had the impression, and I would not be surprised if they had the
>> impression that the git development community is relatively
>> unconcerned about performance issues on larger repositories.
>
> so the question is whether the git community is interested in being
> competitive in such large-scale scenarios - something that mercurial
> seems to be now out of the box

The hgwatchman site (https://bitbucket.org/facebook/hgwatchman) claims:
"On a real-world repository with over 200,000 files, hg status normally
takes over 3 seconds. With hgwatchman it takes under 0.6 seconds."

There have been a few performance improvements in git status to support
such large repositories. I just re-checked git status performance with
the WebKit repo (~200k files):

Linux (with core.preloadIndex):
  git status -uall: 0.620s
  git status -uno : 0.255s

Windows (with core.preloadIndex and core.fscache):
  git status -uall: 1.006s
  git status -uno : 0.695s

Of course, for more reliable benchmark data, you'd have to compare the
same repo on the same platform. But at first glance, it seems that
mercurial with the hgwatchman extension may be about as fast as git is
out of the box, not the other way around.

This comes at the cost of running a background daemon, which may slow
down the entire system. E.g. if the daemon activates whenever the
compiler creates a .o file, it will probably slow down build performance.

Note that hgwatchman doesn't support Windows, so git is probably much
faster there.
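
For anyone who wants to repeat the measurement, a rough sketch of how it
can be done (assuming a WebKit checkout and a warm file cache; the exact
invocation above may have differed, and core.fscache only has an effect
in Git for Windows):

  # enable the optimizations mentioned above
  git config core.preloadindex true
  git config core.fscache true      # Git for Windows only

  # warm the cache once, then time the interesting cases
  git status > /dev/null
  time git status -uall
  time git status -uno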