Another idea, which is probably silly in some way too: after the first
error, couldn't git automatically start over and do this whole
--depth=1 followed by --deepen... process on its own? I feel like
anything that wouldn't require knowing about that process and doing it
manually would be an improvement for people who run into this often.
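For reference, the manual "--depth=1 then --deepen" process being
discussed might look roughly like this as a shell function (this is a
sketch, not an existing git feature; the step size and usage URL below
are made up):

```shell
# resumable_clone: shallow-clone first, then deepen the history in
# small increments, so a dropped connection only loses one step
# instead of the whole transfer.
resumable_clone() {
    url=$1 dir=$2 step=${3:-1000}
    git clone --depth=1 "$url" "$dir" || return 1
    # Keep deepening until git reports the repository is complete;
    # retry a failed step after a short pause.
    while [ "$(git -C "$dir" rev-parse --is-shallow-repository)" = true ]; do
        git -C "$dir" fetch --deepen="$step" || sleep 5
    done
}
```

Usage would be something like `resumable_clone
https://example.com/big.git big 1000` (hypothetical URL).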
Regards,
Ellie
On 6/8/24 11:40 AM, ellie wrote:
Sorry if I'm misunderstanding, and I assume this is a naive suggestion
that may not work in some way: but couldn't git keep a cache of all the
objects it has already fully downloaded? Then it could otherwise start
over cleanly (and automatically), taking the objects it already has
from the local cache. In practice, that might already be enough to get
through a longer clone despite occasional hiccups.
Sorry, I'm really not qualified to make good suggestions; it's just
that the current situation feels frustrating to an outside user.
Regards,
Ellie
On 6/8/24 10:43 AM, Jeff King wrote:
On Sat, Jun 08, 2024 at 02:46:38AM +0200, ellie wrote:
The deepening worked perfectly, thank you so much! I hope a resume
feature will still be considered, though, even if just to help out
newcomers.
Because the packfile to send the user is created on the fly, making a
clone fully resumable is tricky (a second clone may get an equivalent
but slightly different pack due to new objects entering the repo, or
even raciness between threads).
One strategy people have worked on is for servers to point clients at
static packfiles (which _do_ remain byte-for-byte identical, and can be
resumed) to get some of the objects. But it requires some scheme on the
server side to decide when and how to create those packfiles. So while
there is support inside Git itself for this idea (both on the server and
client side), I don't know of any servers where it is in active use.
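(Editor's note: the in-Git support mentioned here is the packfile-uri
feature. A rough sketch of the configuration involved, going by the git
documentation; the object hash, pack hash, and CDN URL are all
placeholders:)

```
# Server side: advertise a pre-generated, byte-for-byte stable pack
# that clients can download (and resume) directly, e.g. from a CDN.
# Value format: "<object-hash> <pack-hash> <uri>".
[uploadpack]
	blobPackfileUri = <object-hash> <pack-hash> https://cdn.example.com/packs/base.pack

# Client side: opt in by listing which protocols are acceptable for
# packfile URIs.
[fetch]
	uriProtocols = https
```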
-Peff