Re: Incremental object transfer

Enrico Weigelt <weigelt@xxxxxxxx> writes:

> As already said some time ago, I'm using git to back up maildirs
> on a machine w/ relatively low ram. The biggest problem for now
> is the initial push (maybe later larger subsequent pushes could
> be also affected too): it takes quite a long time to get everything
> packed, and if the connection breaks (the box is sitting behind
> a dynamic-IP DSL link), everything has to be restarted :(
> 
> So my idea is to incrementally transfer objects in smaller packs,
> disable gc on remote side and update refs when some commit is
> complete.
> 
> Is there any way to do this ?

This would work only for "dumb" transports: the "dumb" HTTP transport
and the deprecated rsync transport.

"Smart" transport (e.g. git://, SSH, new "smart" HTTP transport) all
create packs to send on the fly.  Those packs would rarely be
byte-for-byte the same, even if both server and client have the same
objects, unless perhaps on single code (unthreaded).

There have been discussions about resumable downloads, by examining
what got downloaded in a partial pack, but this is a hard problem.

So I would recommend either creating a bundle (see the git-bundle
manpage), which is an ordinary file and can therefore be downloaded
resumably via HTTP, FTP, or P2P, or cloning via rsync (but then the
repository should be quiescent during the transfer).
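
For illustration, a minimal sketch of the bundle route (the paths,
URL and host below are made up for the example):

    # in the repository to be transferred, bundle everything:
    git bundle create ../mail.bundle --all

    # the bundle is an ordinary file, so any resumable transfer
    # tool works, for example:
    wget -c http://example.org/mail.bundle       # resumable download
    rsync --partial mail.bundle host:/path/      # resumable upload

    # on the receiving end, verify it and get the objects out:
    git bundle verify mail.bundle
    git clone mail.bundle repo.git               # initial clone, or:
    git fetch mail.bundle 'refs/heads/*:refs/remotes/bundle/*'

If the connection breaks mid-transfer, re-running the wget or rsync
step continues from where it left off instead of starting over.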

-- 
Jakub Narebski
Poland
ShadeHawk on #git