Dear git team,
I'm terribly sorry if this is the wrong place, but I'd like to report a
potential issue with "git clone".
The problem is that any interruption or connection issue, no matter how
brief, causes the whole clone to abort and leave nothing behind:
$ git clone https://github.com/Nheko-Reborn/nheko
Cloning into 'nheko'...
remote: Enumerating objects: 43991, done.
remote: Counting objects: 100% (6535/6535), done.
remote: Compressing objects: 100% (1449/1449), done.
error: RPC failed; curl 92 HTTP/2 stream 5 was not closed cleanly:
CANCEL (err 8)
error: 2771 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
$ cd nheko
bash: cd: nheko: No such file or directory
In my experience, this hits hardest with 1. big repositories and
2. unreliable internet connections - which I would argue aren't unheard
of! E.g. a developer may work over a mobile connection on a business
trip. In the worst case, a repository becomes effectively uncloneable
for some users!
This has left me in the absurd situation where I could download a
tarball via HTTPS from the git hoster just fine - even far larger
binary release assets - thanks to the browser resuming interrupted
HTTPS transfers. And yet a simple git clone of the same project failed
repeatedly.
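The closest thing to a workaround I have found is to start shallow and
then deepen the history in steps, so an interruption only loses the
current increment rather than the whole transfer. This assumes the
server permits shallow fetches, and the depth values below are just
examples:

$ git clone --depth 1 https://github.com/Nheko-Reborn/nheko
$ cd nheko
$ # Re-run after any interruption; each step fetches a bounded chunk.
$ git fetch --deepen=1000
$ # Once the connection holds, fetch the remaining history in one go.
$ git fetch --unshallow

But each individual fetch is still all-or-nothing, so this only shrinks
the failure window instead of removing it.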
My deepest apologies if I missed an option that already addresses this.
But to sum up: please consider making git clone recover from such
hiccups.
Regards,
Ellie
PS: I've seen git hosters with apparent proxy bugs, such as timing out
slower git clone connections from the server side even while the
transfer is still making progress. An auto-resume in git would reduce
the impact of that, too.
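PPS: As a stopgap, where a hoster publishes repository bundles (some
mirrors do - the URL below is made up), the bundle can be fetched with
a resume-capable downloader and cloned locally, much like the tarball
above but with full history:

$ # curl -C - resumes a partial download after an interruption.
$ curl -L -C - -O https://example.org/mirrors/nheko.bundle
$ git clone nheko.bundle nheko
$ cd nheko
$ # Point origin back at the real repository and catch up.
$ git remote set-url origin https://github.com/Nheko-Reborn/nheko
$ git fetch origin

Of course that only helps where such a bundle exists in the first
place.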