On 11.03.2011 14:48, Nguyen Thai Ngoc Duy wrote:
> On Fri, Mar 11, 2011 at 7:52 PM, Ilari Liusvaara
> <ilari.liusvaara@xxxxxxxxxxx> wrote:
>> On Fri, Mar 11, 2011 at 01:18:45PM +0100, Alexander Miseler wrote:
>>> Resumable clone
>>
>> This is very, very hard. Not so much to implement, but to design it in
>> a way that does not assume things (like object sort orders) that aren't
>> stable.
> Yes, it's hard. I have some experimental thing that nearly works [1],
> although whether it is an acceptable approach remains to be seen. If
> anyone's interested, I'll post it some time.
>
> A simpler route to resumable clone is to use bundles (Nicolas' idea).
> Some glue is needed to teach git-fetch/git-daemon to use the bundles,
> and git-push to automatically create bundles periodically (or a new
> command that can be run from cron). I think this fits the GSoC scope
> better.
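Concretely, the bundle flow could look something like the sketch below. The paths and host are made up, and the automatic server-side glue is exactly the part that doesn't exist yet; the point is that a bundle is a plain file, so the download itself is trivially resumable:

```shell
# Server side, run periodically (e.g. from cron) in the bare repository.
# /srv/git/project.git and example.org are hypothetical.
git --git-dir=/srv/git/project.git \
    bundle create /var/www/project.bundle HEAD --all

# Client side: resume a partial download with ordinary byte ranges
# (curl -C - continues from where the previous transfer stopped).
curl -C - -O http://example.org/project.bundle

# Clone from the bundle, then point origin at the live repository
# and catch up on whatever the bundle snapshot is missing.
git clone project.bundle project
git -C project remote set-url origin git://example.org/project.git
git -C project fetch origin
```

Everything after `bundle create` and `clone` is stock git; only the periodic creation and the fetch/daemon awareness of bundles would be new.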
> [1] The idea behind my work above was mentioned elsewhere: history is
> cut down by path, so each file/dir's history becomes a very long chain
> of deltas. We can stream those deltas (in parallel if needed) over the
> wire, resuming from where the chain stopped last time.
>
> There are many problems. One is that a deep chain can make git run out
> of stack space, so chains have to be broken down before storing (not
> done yet). Another is that not many deltas can be reused, so it will
> consume more CPU than a normal clone. But once you clone this way, the
> cloned repo has lots of deltas suitable for another clone (though
> probably not for anything else).
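To make the resume-by-chain idea above concrete, here is a toy model. Everything in it (the chain contents, the "protocol") is invented for illustration; real git has no such stream format. The client remembers how many deltas of each path's chain it has applied, and after a dropped connection asks the server to skip that many:

```python
# Toy model of per-path delta-chain streaming with resume.
# Server side: each path's history as an ordered chain of deltas.
CHAINS = {
    "Makefile": ["d0", "d1"],
    "README": ["d0", "d1", "d2"],
}

def stream(path, start):
    """Serve path's chain, skipping the first `start` deltas (the resume point)."""
    return CHAINS[path][start:]

# Client side: `progress` maps path -> number of deltas already applied,
# so an interrupted transfer only re-requests each unfinished tail.
def fetch(progress, interrupt_after=None):
    received = 0
    for path in sorted(CHAINS):
        for delta in stream(path, progress.get(path, 0)):
            if received == interrupt_after:
                return False                 # connection dropped mid-transfer
            progress[path] = progress.get(path, 0) + 1
            received += 1
    return True

progress = {}
assert fetch(progress, interrupt_after=3) is False   # first attempt dies
assert fetch(progress) is True                       # resume finishes the rest
assert progress == {p: len(c) for p, c in CHAINS.items()}
```

The interesting property is in the last assert: no delta is transferred twice, at the cost of the server being able to reproduce each chain in a stable order, which is exactly the hard part in the real thing.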
This may all be aiming too short. IMHO the best solution would be some
generic way for the client to specify exactly what it wants to get, and
to get just that. This would lay the groundwork for:
- lazy clones
- sparse clones
- resumable cloning
- resumable fetching
and probably quite a few other nifty tricks.
I guess that would be far beyond the scope of an SoC project, though.
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html