Re: RFC: Would a config fetch.retryCount make sense?

On 6/1/2017 8:48 AM, Lars Schneider wrote:
> Hi,
>
> we occasionally see "The remote end hung up unexpectedly" (pkt-line.c:265)
> on our `git fetch` calls (most noticeably in our automations). I expect
> random network glitches to be the cause.
>
> In some places we added a basic retry mechanism and I was wondering
> if this could be a useful feature for Git itself.

Having a configurable retry mechanism makes sense, especially if it can resume an in-progress download rather than aborting and starting over. I would make it off by default, so that any existing higher-level retry mechanism doesn't trigger a retry storm when the problem isn't a transient network glitch.
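To make the idea concrete, here is a minimal sketch of what an off-by-default retry wrapper around `git fetch` might look like. This is purely illustrative, not Git's implementation; the function name, the linear backoff, and the injectable `run` parameter are all assumptions for the sketch, and the proposed `fetch.retryCount` setting does not exist in Git today.

```python
import subprocess
import time

def fetch_with_retries(args, retry_count=0, backoff=2.0, run=subprocess.run):
    """Run `git fetch` and retry up to retry_count times on failure.

    retry_count=0 keeps today's behavior: one attempt, no retries,
    so a higher-level retry loop cannot be amplified into a storm.
    `run` is injectable so the logic can be tested without a network.
    """
    attempts = 1 + max(0, retry_count)
    result = None
    for attempt in range(attempts):
        result = run(["git", "fetch"] + list(args))
        if result.returncode == 0:
            return result  # fetch succeeded, stop retrying
        if attempt < attempts - 1:
            # Simple linear backoff between attempts (hypothetical policy).
            time.sleep(backoff * (attempt + 1))
    return result
```

A wrapper like this could read its count from `git config --get fetch.retryCount` and fall back to 0 when the key is unset, preserving current behavior for everyone who hasn't opted in.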

Internally we use a tool (https://github.com/Microsoft/GVFS/tree/master/GVFS/FastFetch) to perform fetches for our build machines. It has several advantages, including retrying pack file downloads.

Its biggest advantage is that it uses multiple threads to parallelize the entire fetch and checkout operation end to end (i.e., the download happens in parallel, and checkout happens in parallel with the download), which reduces the overall time to a fraction.
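The end-to-end overlap described above is essentially a producer/consumer pipeline: several download workers feed a queue, and checkout consumes items as they arrive instead of waiting for the whole fetch to finish. The sketch below illustrates that shape only; it is not FastFetch's actual code, and `download_one`/`checkout_one` are hypothetical callables standing in for the real pack-download and working-tree steps.

```python
import queue
import threading

def pipelined_fetch_checkout(items, download_one, checkout_one, workers=4):
    """Overlap download and checkout: each downloaded item is checked
    out as soon as it arrives, while other downloads are still running."""
    ready = queue.Queue()
    DONE = object()  # sentinel telling the consumer to stop

    def downloader(chunk):
        for item in chunk:
            ready.put(download_one(item))

    # Stripe the work across download threads (parallel download).
    chunks = [items[i::workers] for i in range(workers)]
    dl_threads = [threading.Thread(target=downloader, args=(c,)) for c in chunks]
    for t in dl_threads:
        t.start()

    checked_out = []

    def consumer():
        # Checkout runs concurrently with the still-active downloads.
        while True:
            item = ready.get()
            if item is DONE:
                break
            checked_out.append(checkout_one(item))

    co_thread = threading.Thread(target=consumer)
    co_thread.start()
    for t in dl_threads:
        t.join()
    ready.put(DONE)  # all downloads finished; drain and stop the consumer
    co_thread.join()
    return checked_out
```

The win comes from the checkout thread never idling while bytes are in flight: total time approaches max(download, checkout) rather than their sum.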

When time permits, I hope to bring some of these enhancements over to Git itself.

> E.g. a Git config such as "fetch.retryCount" or something.
> Or is there something like this in Git already and I missed it?
>
> Thanks,
> Lars



