Julian Phillips <julian@xxxxxxxxxxxxxxxxx> writes:

> While trying out git on a large repository (10000s of commits, 1000s
> of branches, ~2.5Gb when packed) at work I noticed that doing a pull
> was taking a long time (longer than I was prepared to wait anyway).

Are they all real branches?  In other words, does your project have
1000s of active parallel lines of development?

> So what I would like to know is: is there any way to make a pull/fetch
> with no options default to only fetching the current branch? (other
> than scripting "git pull/fetch origin $(git symbolic-ref HEAD)" that
> is)

Also, assuming the answer to the above question is yes, will you have
1000s of branches on your end, and will you work on any one of them?

The default configuration created by git-clone makes you track all
branches from the remote side by putting:

	remote.origin.fetch = +refs/heads/*:refs/remotes/origin/*

If you do not care about all 1000s of branches but are only interested
in a selected few, you could change that configuration to suit your
needs better:

	remote.origin.fetch = +refs/heads/stable:refs/remotes/origin/stable
	remote.origin.fetch = +refs/heads/testing:refs/remotes/origin/testing
	remote.origin.fetch = +refs/heads/unstable:refs/remotes/origin/unstable

I suspect most of the time is being spent in the append-fetch-head
loop in the fetch_main shell function in git-fetch.sh.  The true fix
would not be to limit the number of branches updated, but to speed
that part of the code up.

-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
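The refspec change described above can also be made with git-config rather than by editing .git/config by hand. A minimal sketch, assuming the branch names stable/testing/unstable from the example stand in for whatever branches you actually care about, and a hypothetical remote URL:

```shell
# Demo stand-in for an existing clone (URL is a placeholder).
git init -q demo && cd demo
git remote add origin https://example.com/big-repo.git
# "git remote add" installs the catch-all refspec
# +refs/heads/*:refs/remotes/origin/* by default.

# Drop the catch-all refspec and list only the selected branches.
git config --unset-all remote.origin.fetch
git config --add remote.origin.fetch '+refs/heads/stable:refs/remotes/origin/stable'
git config --add remote.origin.fetch '+refs/heads/testing:refs/remotes/origin/testing'
git config --add remote.origin.fetch '+refs/heads/unstable:refs/remotes/origin/unstable'

# A plain "git fetch origin" will now update only these three branches.
git config --get-all remote.origin.fetch
```

Multiple `remote.origin.fetch` entries are additive, so each `--add` contributes one refspec, exactly like the stacked lines in the config excerpt above.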