Re: With big repos and slower connections, git clone can be hard to work with

Sorry if I'm misunderstanding, and this may be a naive suggestion that doesn't work for some reason: but couldn't git keep a local cache of every object it has already fully downloaded? On a retry it would otherwise start over cleanly (and automatically), but take the objects it already has from the local cache instead of the network. In practice, that might already be enough to get through a longer clone despite occasional hiccups.

Sorry, I'm really not qualified to make good suggestions; it's just that the current situation feels frustrating as an outside user.
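Something close to the cache-and-retry idea above can already be approximated by hand, assuming the server operator is willing to publish a snapshot of the repository as a static bundle file: a bundle is a single ordinary file, so an interrupted download can be resumed by any plain HTTP client. A rough sketch (the URLs and the branch name are placeholders):

    # Server side: publish a snapshot of the repository as one static file.
    git bundle create repo.bundle --all

    # Client side: download resumably (wget -c continues a partial file),
    # clone from the local bundle, then point origin at the real server
    # and fetch whatever is newer than the snapshot.
    wget -c https://example.com/repo.bundle
    git clone -b main repo.bundle repo
    cd repo
    git remote set-url origin https://example.com/repo.git
    git fetch origin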

Regards,

Ellie

On 6/8/24 10:43 AM, Jeff King wrote:
> On Sat, Jun 08, 2024 at 02:46:38AM +0200, ellie wrote:
>
> > The deepening worked perfectly, thank you so much! I hope a resume
> > feature will still be considered, however, even if just to help out
> > newcomers.
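The deepening that worked here is presumably the usual shallow-clone workaround: start from a depth-1 clone, which transfers only a small pack, then grow the history in bounded steps, so a dropped connection only costs the most recent step. A minimal sketch, with a placeholder URL:

    git clone --depth=1 https://example.com/repo.git
    cd repo
    # Each step fetches a limited slice of history; if the connection
    # drops, only that step needs to be retried.
    git fetch --deepen=1000
    git fetch --deepen=1000
    # ...repeat as needed, then optionally convert to a full clone:
    git fetch --unshallow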

> Because the packfile sent to the user is created on the fly, making a
> clone fully resumable is tricky (a second clone may get an equivalent
> but slightly different pack, due to new objects entering the repo or
> even raciness between threads).
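One way to observe this nondeterminism (the URL is a placeholder, and the two packs may happen to be identical for a small, idle repository):

    # Two independent clones of the same repo receive equivalent objects,
    # but the pack bytes are not guaranteed to match:
    git clone --bare https://example.com/repo.git a.git
    git clone --bare https://example.com/repo.git b.git
    cmp a.git/objects/pack/*.pack b.git/objects/pack/*.pack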

> One strategy people have worked on is for servers to point clients at
> static packfiles (which _do_ remain byte-for-byte identical, and can be
> resumed) to get some of the objects. But it requires some scheme on the
> server side to decide when and how to create those packfiles. So while
> there is support inside Git itself for this idea (both on the server and
> client side), I don't know of any servers where it is in active use.

> -Peff
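The in-tree support mentioned here is presumably the packfile-URI extension, where the server excludes some objects from the dynamically generated pack and instead points the client at a static, resumable pack on a CDN. A hedged sketch of the configuration involved; the hashes and URLs below are placeholders:

    # Server side: exclude the objects in a pre-built pack from the
    # generated pack, and advertise a static URL for them instead.
    # Value format: "<object-hash> <pack-hash> <uri>"  (placeholders here)
    git config uploadpack.blobPackfileUri \
        "0123abcd... 4567cdef... https://cdn.example.com/prebuilt.pack"

    # Client side: opt in to downloading packs over https during clone.
    git -c fetch.uriProtocols=https clone https://example.com/repo.git

More recent versions of Git also ship a related bootstrapping mechanism, git clone --bundle-uri=<uri>, which lets a client seed most of the history from a static bundle before contacting the server for the remainder.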



