Michal Suchánek <msuchanek@xxxxxxx> wrote:
> On Fri, Oct 07, 2022 at 08:44:09AM +0700, Bagas Sanjaya wrote:
> > On 10/7/22 01:01, m wrote:
> > > In my country the government makes connections unstable on
> > > purpose. Please add resume capability for commands like git clone.
> >
> > Bandwidth issue?
>
> Bandwidth is one thing, but the other is that git network operations
> require the whole operation to succeed in one go.
>
> If your connectivity is bad to the point that the TCP connection
> breaks, you have downloaded a bunch of data that is AFAIK just thrown
> away when you retry.
>
> It is difficult to know whether that data would be useful in the
> future, and you cannot meaningfully 'resume' because the remote state
> might have changed in the meantime as well.
>
> Further, this whole fetch operation uses a heuristic to fetch some
> data in the hope that it will be enough to reconstruct the requested
> history, and this has been wrong in some cases, too. It is not very
> precise or reproducible, hence hard to 'resume' as well.
>
> Let's say that git networking has been developed on and tuned for the
> 'first world' Internet, and may be problematic to use in net-wise
> backwater areas. Changing that would require non-trivial effort.

Increased adoption of bundles would help, since `wget -c' and such
would work nicely, but that puts the burden of extra storage on hosts.
A rough sketch of the bundle workflow is below.

Perhaps GIT_SMART_HTTP=0 and having dumb clones not throw away
incomplete transfers would be more transparent to hosters, but dumb
HTTP tends to be slow even on good connections.
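
A rough sketch of the bundle workflow, assuming the host generates and
serves a bundle somewhere (the example.com URLs are made up):

	# on the host: pack all refs into a single bundle file
	git bundle create repo.bundle --all

	# on the client: download it resumably; `wget -c' continues a
	# partial download across broken connections
	wget -c https://example.com/repo.bundle

	# verify the bundle, clone from it locally, then repoint
	# origin at the live remote for incremental fetches
	git bundle verify repo.bundle
	git clone repo.bundle repo
	cd repo
	git remote set-url origin https://example.com/repo.git
	git fetch origin

For comparison, forcing the dumb protocol over HTTP looks like:

	GIT_SMART_HTTP=0 git clone https://example.com/repo.git

With the bundle approach, only the initial clone pays the big
one-shot download; everything afterward is a small incremental fetch
over the normal smart protocol, which is much more likely to survive
a flaky connection.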