On Thu, Jun 1, 2017 at 5:48 AM, Lars Schneider <larsxschneider@xxxxxxxxx> wrote:
> Hi,
>
> we occasionally see "The remote end hung up unexpectedly" (pkt-line.c:265)
> on our `git fetch` calls (most noticeably in our automations). I expect
> random network glitches to be the cause.

There is 665b35eccd (submodule--helper: initial clone learns retry
logic, 2016-06-09), but that covers only submodules, and only the
initial clone. I tried searching the mailing list archive to see
whether retries for fetch were discussed before (I am sure they
were), but could not find a good thread to link to.

IIRC one major concern was:

* When a human operates git-fetch, they want fast feedback. The
  failure may be non-transient, for example when I forgot to bring up
  the wifi connection; then the human can inspect and fix the root
  cause. (The assumption in the human workflow is that such
  non-transient errors happen more often than the occasional fetch
  failure due to a network glitch.)

For automation, however, I would expect retry logic to be genuinely
beneficial, so you would want a command line option such as
"git fetch --retries=5 --delay-between-retries=10s" (a wrapper that
emulates this today is sketched below).

> In some places we added a basic retry mechanism and I was wondering
> if this could be a useful feature for Git itself.

There are already retries in other places. :) Cf. f4ab4f3ab1
(lock_packed_refs(): allow retries when acquiring the packed-refs
lock, 2015-05-11), which addresses GitHub's needs on the server side:
they have very active repos that multiple people push to at the same
time (to different branches). I believe forks are internally handled
as the same repo, just with different namespaces, so if there are
1000 forks of linux.git you see a lot of pushes to the "same" repo.

> E.g. a Git config such as "fetch.retryCount" or something.
> Or is there something like this in Git already and I missed it?

I like it. (A sketch of what such a config might look like is
appended below as well.)

Thanks,
Stefan
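
P.S.: To illustrate the automation use case: the --retries and
--delay-between-retries options above are hypothetical, neither
exists in git today, so a minimal shell wrapper along these lines is
what automations typically resort to:

    #!/bin/sh
    # fetch-with-retries: emulate the hypothetical
    # "git fetch --retries=5 --delay-between-retries=10s".
    retries=5
    delay=10
    attempt=1
    while ! git fetch "$@"
    do
        if [ "$attempt" -ge "$retries" ]
        then
            echo "git fetch failed after $retries attempts" >&2
            exit 1
        fi
        attempt=$((attempt + 1))
        sleep "$delay"
    done

Note that this retries on *any* non-zero exit, which is exactly the
behaviour that annoys a human whose wifi is down; a built-in option
would have the chance to retry only on errors that look transient.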
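
And the config side that Lars suggests could look like this in
.git/config (again hypothetical, no such keys exist yet), with the
command line options overriding the config values as usual:

    [fetch]
        retryCount = 5   # hypothetical
        retryDelay = 10  # hypothetical; seconds between retries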