My current workflow as the maintainer of a couple of projects is this: I have a publicly-accessible repository whose "master" is the important branch. When there are changes I want to add, I fetch that branch on a workstation into the workstation's "mainline" branch, pull it into "for-mainline" (which is a fast-forward), apply patches, test, commit, and push back to "master". Usually I do this several times in a row on the same computer, and it's a bit inconvenient that, after pushing local "for-mainline" to remote "master", I have to fetch remote "master" back into local "mainline". It would be nice if git updated its local tracking of remote refs when it pushes to those remote refs.

(Furthermore, I also use another branch for development, where I am a lot less careful about the quality of commits, and I generate patches from the diff between the development branch and mainline. But I do development on multiple workstations, so I have a remote branch on my server that I push the development work to, so that I can get at it from other computers even before I clean it up and put it in the mainline. When I generate patches, I do it between "for-mainline" and the tracking branch for the remote development branch, which ensures that I remember to push my development work to the server. But then I have to fetch it back immediately so that I can use it to generate the patches.)

I can't think of a situation in which you would not want the effect of an immediate fetch after a successful push, and it saves a couple of ssh connections to use the local knowledge that, at least for an instant, the remote ref matches the local one. Is there some reason not to do this?

-Daniel

*This .sig left intentionally blank*
-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
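For concreteness, one round of the maintainer loop described above might look like the following sketch on a throwaway repository. The branch names "mainline" and "for-mainline" are taken from the mail; everything else (the scratch server repo, the remote name "origin", and the empty commit standing in for "apply patches, test, commit") is illustrative, not the author's actual setup:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# Illustrative scratch setup: a "public" bare repository and a workstation clone.
git init -q --bare "$tmp/server.git"
git clone -q "$tmp/server.git" "$tmp/work"
cd "$tmp/work"
git config user.name  "Maintainer"
git config user.email "maintainer@example.org"
git commit -q --allow-empty -m "initial"
git branch -M master
git push -q -u origin master

git branch mainline master                 # local image of the remote master
git checkout -q -b for-mainline mainline   # the working branch from the mail

# --- one round of the loop described in the mail ---
git fetch -q origin master:mainline        # fast-forward mainline to remote master
git merge -q --ff-only mainline            # "pull into for-mainline (a fast-forward)"
git commit -q --allow-empty -m "patch"     # stands in for: apply patches, test, commit
git push -q origin for-mainline:master     # publish the result

# The inconvenience: the remote's master now equals for-mainline, yet the
# local "mainline" only learns that through one more round trip:
git fetch -q origin master:mainline
```

After the final fetch, "mainline" again matches what was just pushed, even though that fact was already known locally the moment the push succeeded; that redundant round trip is exactly what the mail proposes to eliminate.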