Re: git clone algorithm

On Tue, Oct 9, 2012 at 10:53 PM, Bogdan Cristea <cristeab@xxxxxxxxx> wrote:
> I have already posted this message on git-users@xxxxxxxxxxxxxxxx but I have been
> advised to rather use this list. I know that there is a related thread
> (http://thread.gmane.org/gmane.comp.version-control.git/207257), but I don't
> think it answers my question (I too am on a slow 3G
> connection :))
>
> I am wondering what algorithm is used by the git clone command?
> When cloning from a remote repository, if there is a link failure and
> the same command is issued again, the process should be smart enough
> to figure out which objects have already been transferred locally and
> restart the cloning process from the point at which it was interrupted.
> As far as I can tell this is not the case: each time I have restarted
> the cloning process, everything started from the beginning. This is
> extremely annoying on slow, unreliable connections. Are there any
> ways to cope with this situation, or any future plans?

This is not an answer to your question in the general case, sorry...

Admins managing a site with gitolite can set it up to automatically
create and maintain "bundle" files, and allow them to be downloaded
over rsync (which, as everyone knows, is resumable), using the same
authentication and access rules as gitolite itself.  Once you add a
couple of lines to gitolite.conf, it's pretty much self-maintaining.
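
As a rough sketch of the underlying technique (the gitolite-side
automation and any host names here are hypothetical, not the actual
gitolite feature's config): the server periodically writes the whole
repository into a single bundle file, and a bundle file is itself a
valid clone source, so the client only needs a resumable file transfer
to get it:

```shell
set -e
# Stand-in for the server's repository (hypothetical demo content).
git init -q repo && cd repo
echo hello > README
git add README
git -c user.name=demo -c user.email=demo@example.com commit -qm "initial commit"

# Write every ref into one bundle file; server-side automation would
# regenerate this periodically.
git bundle create ../repo.bundle --all
cd ..

# Client side: fetch the bundle resumably, e.g. with
#   rsync --partial host:repo.bundle .
# (skipped here), then clone from the local file:
git clone -q repo.bundle myrepo

# Later, repoint origin at the live repository and catch up with
# an ordinary (and much smaller) incremental fetch:
#   git -C myrepo remote set-url origin <real-url> && git -C myrepo fetch
```

The point is that rsync's --partial keeps an interrupted download on
disk and resumes it, which git clone's pack transfer does not do.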
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

