Re: Continue git clone after interruption

On Wed, Aug 19, 2009 at 12:15 AM, Jakub Narebski <jnareb@xxxxxxxxx> wrote:
> There is another way we could implement resumable clone.
> Git first tries to clone the whole repository (single pack; BTW, what
> happens if this pack is larger than the file size limit of the given
> filesystem?).  If that fails, the client asks for the first half of
> the repository (half as in bisect, but it is the server that has to
> calculate it).  If that downloads, it asks the server for the rest of
> the repository.  If it fails, it halves the size again, and first asks
> for 1/4 of the repository in a packfile.

How about an extension where the user can *ask* for a clone of a
particular HEAD to be sent to him as a git bundle?  Or particular
revisions (say, once a week) could be kept as a single-file git
bundle, made available over HTTP -- easily restartable with byte
ranges -- and anyone with bandwidth problems first fetches that, then
changes the origin remote URL and does a "pull" to get up to date?
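On the client side that's only a handful of commands; here is a sketch
(the bundle URL and repository URL are hypothetical):

```shell
# Fetch the weekly bundle; wget -c resumes an interrupted download
# using HTTP byte-range requests.
wget -c http://git.example.com/bundles/project.bundle

# A bundle file is a valid clone source.
git clone project.bundle project
cd project

# Point origin at the live repository and catch up past the snapshot.
git remote set-url origin git://git.example.com/project.git
git pull
```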

I've done this manually a few times when sneakernet bandwidth was
better than the normal kind, heh, and it seems to me the lowest-impact
solution.

Yes, you'd need some extra space on the server, but you keep only one
bundle, and can replace it every week by cron.  It should work fine
right now, as is, with a wee bit of manual work by the user and a
quick cron entry on the server.
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
