Re: Git multiple remotes push stop at first failed connection

On Sun, May 31, 2020 at 08:28:38PM -0400, John Siu wrote:

> Let's say my project has the following remotes:
> 
> $ git remote -v
> git.all "server A git url" (fetch)
> git.all "server A git url" (push)
> git.all "server B git url" (push)
> git.all "server C git url" (push)
> 
> When servers A/B/C are all online, "git push" works.

A slight nomenclature nit, but that's _one_ remote that has several
push urls.
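
To make the distinction concrete, here is a sketch of how such a
single-remote/multi-push-URL setup gets built (the local paths are
stand-ins for the real server URLs):

```shell
# Build a one-remote, three-push-URL configuration in a throwaway repo.
tmp=$(mktemp -d)
git init -q "$tmp/work"
cd "$tmp/work"

# One remote with one fetch URL...
git remote add git.all "$tmp/serverA.git"

# ...and three push URLs. Note: as soon as the first --push URL is
# added, the fetch URL is no longer used for pushing.
git remote set-url --add --push git.all "$tmp/serverA.git"
git remote set-url --add --push git.all "$tmp/serverB.git"
git remote set-url --add --push git.all "$tmp/serverC.git"

# Shows one (fetch) line and three (push) lines, as in the quote above.
git remote -v
```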

> However, "git push" stops at the first server it fails to connect to.
> So if git cannot connect to server A, it will not continue with
> servers B/C.
> 
> In the past I have had server C turned off from time to time, so
> failing the last push is expected. However, recently server A went
> offline completely and we noticed git was not pushing to the
> remaining two remotes.
> 
> Not sure if this is intended behavior or can be improved.

I don't think we've ever documented the error-handling semantics.
Looking at the relevant code in builtin/push.c:do_push():

          url_nr = push_url_of_remote(remote, &url);
          if (url_nr) {
                  for (i = 0; i < url_nr; i++) {
                          struct transport *transport =
                                  transport_get(remote, url[i]);
                          if (flags & TRANSPORT_PUSH_OPTIONS)
                                  transport->push_options = push_options;
                          if (push_with_options(transport, push_refspec, flags))
                                  errs++;
                  }
          } else {
                  struct transport *transport =
                          transport_get(remote, NULL);
                  if (flags & TRANSPORT_PUSH_OPTIONS)
                          transport->push_options = push_options;
                  if (push_with_options(transport, push_refspec, flags))
                          errs++;
          }
          return !!errs;

it does seem to try each URL and collect the errors. But the underlying
transport code is so eager to die() on errors, taking down the whole
process, that I suspect it rarely makes it past the first failed URL in
practice. You're probably much
better off defining a separate remote for each push destination, then
running your own shell loop:

  err=0
  for dst in serverA serverB serverC; do
    git push "$dst" || err=1
  done
  exit $err
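
Filled out as a runnable sketch (three local bare repositories stand in
for the servers, and deleting one simulates it going offline):

```shell
# Create three bare "servers" and a working repo with one remote each.
tmp=$(mktemp -d)
for s in serverA serverB serverC; do
  git init --bare -q "$tmp/$s.git"
done
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.email=you@example.com -c user.name=You \
  commit -q --allow-empty -m init
for s in serverA serverB serverC; do
  git remote add "$s" "$tmp/$s.git"
done

# Simulate server A being completely offline.
rm -rf "$tmp/serverA.git"

# The loop keeps going past the failure and reports overall status.
err=0
for dst in serverA serverB serverC; do
  git push -q "$dst" HEAD || err=1
done
echo "err=$err"
```

The push to serverA fails, but serverB and serverC still receive the
branch, and err=1 records that something went wrong.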

There's really no benefit to doing it all in a single Git process, as
we'd connect to each independently, run a separate independent
pack-objects for each, etc.

I'd even suggest that Git implement such a loop itself, as we did for
"git fetch --all", but sadly "push --all" is already taken for a
different meaning (though it might still be worth doing under a
different option name).
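
Until something like that exists, a user-level stand-in via an alias is
easy enough (the "push-each" name is made up here, not a real option):

```shell
# Demo repo: one bare "server" plus a working clone-like repo.
tmp=$(mktemp -d)
git init --bare -q "$tmp/origin.git"
git init -q "$tmp/work"
cd "$tmp/work"
git remote add origin "$tmp/origin.git"

# Alias: push to every configured remote, continue past failures,
# and exit nonzero if any push failed.
git config alias.push-each \
  '!f() { err=0; for r in $(git remote); do git push "$r" "$@" || err=1; done; exit $err; }; f'

git -c user.email=you@example.com -c user.name=You \
  commit -q --allow-empty -m init
git push-each -q HEAD
```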

-Peff


