On Mon, 11 Jun 2007, Alex R.M. Turner wrote:
>
> Cool - that totally makes sense, HEAD is a link to master. so updating
> HEAD failed because it was already up to date.

Yes. Well, strictly speaking it failed because it _wasn't_ up-to-date
*before* the push (so "git push" thought it should update it), but it had
become up-to-date (through the symref link) by the time it was actually
its turn to be updated.

> The command was simply:
>
> git push

Ok, as Junio points out, it's then just the fact that both repositories
had the same "remote" refs, and then the default of just updating
everything in common kicks in.

That default used to make sense back when, but it doesn't make sense for
remotes, since those are generally "local" to each repo.

> This repo was cloned from one on another server (the server I use to
> backup everything) with a git clone command:

Yeah. Normally you'd (well, _I_ would) only push to bare repositories,
and normally you wouldn't make those bare repositories have "remotes"
entries, which is why you're the first to apparently even notice this
insanity.

It wasn't your fault, it's simply bad defaults for git behaviour. The
behaviour for "git pull" has improved _immensely_ over the last few
months, but "git push" still does the same thing it always did, because
fewer people care about pushing than pulling, and because the old "git
push" behaviour of just updating all the branches in common actually
happens to be the right thing when you do *not* make the central
repository contain remote branches of its own.

> Based on all this, what is the correct way to update my core repo on my
> server? (I'm sorry - I'm pretty new to git, so I haven't quite cottoned
> on to some aspects yet).

With the current git model, I would suggest (there's a rough
command-level sketch of this at the end of this mail):

 - for "central" repositories, use a bare repository, and don't create
   "remotes" branches in that central repository at all.

 - for other repositories, don't push into them, just _pull_ into them
   (because that also knows about updating the working tree etc: pushing
   is really meant to be done only into bare and central ones that don't
   actually have any work happening in them, and _cannot_ have any work
   happening in them because they don't even have a working directory
   associated with them).

That said, I don't think that's necessarily the right answer in the
longer run. It's how git people do things, but it's not necessarily the
*best* way of doing things.

I think the better solution in the longer term is to simply improve how
"git push" works:

 - we should probably do the same kinds of .git/config file entries for
   pushing as we do for fetching, and just get rid of the old implicit
   model, and instead have a nice refspec pattern model for what gets
   pushed instead.

   I _think_ the refspec cleanup work by Daniel makes this something we
   can almost already do. Daniel?

 - we should also likely have some way to specify what happens when you
   push into a branch that is currently checked out and has a working
   tree associated with it. This was briefly discussed a few weeks ago,
   but nobody cared enough, I suspect.

Anyway, I think the _proper_ thing to do would be to associate each
[remote] entry in the config file with a "push" refspec pattern, the way
we do for "fetch" already (an example of what that config could look
like is at the end of this mail).

		Linus
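
About the "HEAD is a link to master" part at the top: that symref link is
something you can look at directly with git-symbolic-ref. In a repository
that has "master" checked out you'd see something like

	$ git symbolic-ref HEAD
	refs/heads/master

which is why updating "master" through the push also made HEAD
up-to-date.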
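
And here's the rough sketch I mentioned for the "bare central repository"
setup. The paths and URLs are obviously just made up for illustration -
use whatever matches your own server layout:

	# On the server: create a bare "central" repository - no working
	# tree, and no "remotes" branches of its own.
	git clone --bare /path/to/existing/repo /srv/git/project.git

	# On a workstation: clone from the central one, do your work
	# there, and push back into the bare repository.
	git clone ssh://server/srv/git/project.git
	cd project
	# ... hack and commit as usual ...
	git push ssh://server/srv/git/project.git master

	# Any other repository that has a working tree should _pull_ from
	# the central one rather than be pushed into:
	git pull ssh://server/srv/git/project.git master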
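
Finally, this is the kind of thing I mean by associating a "push" refspec
pattern with each [remote] entry in .git/config. The url is just an
example, and whether the wildcard on the "push" line actually works yet
depends on the refspec cleanups mentioned above:

	[remote "origin"]
		url = ssh://server/srv/git/project.git
		fetch = +refs/heads/*:refs/remotes/origin/*
		push = refs/heads/*:refs/heads/*

With something like that in place, a plain "git push" would have an
explicit rule for what to update, instead of relying on the old
"everything in common" default.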