Re: Resumable clone/Gittorrent (again)

On 15/01/11 03:26, Luke Kenneth Casson Leighton wrote:
>>> and change that graph?  are you _certain_ that you can write an
>>> algorithm which is capable of generating exactly the same mapping,
>>> even as more commits are added to the repository being mirrored, or,
>>> does that situation not matter?
>> For a given set of start and end points, and a given sort algorithm,
>> walking the commit tree can yield deterministic results.
>  excellent.  out of curiosity, is it as efficient as git pack-objects
> for the same start and end points?

That isn't a sensible question; walking the revision tree is something
that many commands, including git pack-objects, do internally.
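For what it's worth, the determinism claim is easy to illustrate outside
git.  Here is a minimal sketch (a toy commit DAG, not git's actual code):
given fixed start and end points and a fixed tie-breaking sort, a
topological walk yields the same order every time.

```python
# Toy illustration: walking a commit DAG between fixed start and end
# points, with a fixed tie-break order, is deterministic.  This is a
# sketch, not git's implementation; commit IDs are made-up strings.

def walk(parents, tips):
    """Return commits reachable from `tips` in a deterministic order:
    depth-first, visiting parents in sorted order, roughly like a
    --topo-order walk with a defined tie-break."""
    seen, order = set(), []
    def visit(c):
        if c in seen:
            return
        seen.add(c)
        for p in sorted(parents.get(c, [])):
            visit(p)
        order.append(c)  # parents are emitted before children
    for t in sorted(tips):
        visit(t)
    return order

# A small history: root -> a -> b, root -> c, merge m of (b, c).
parents = {"a": ["root"], "b": ["a"], "c": ["root"], "m": ["b", "c"]}

# Two independent walks over the same graph give identical results.
first = walk(parents, ["m"])
second = walk(parents, ["m"])
```

The same property is what would let two peers agree on a commit range
without exchanging the full list of objects in it.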

>> Did you look at any of the previous research I linked to before?
>  i've been following this since you first originally started it, sam
> :)  it would have been nice if it was a completed implementation
> that i could test and see "for real" what you're referring to (above)
> - the fact that it's in perl and has "TODO" at some of the critical
> points, after trying to work with it for several days i stopped and
> went "i'm not getting anywhere with this" and focussed on bittorrent
> "as a black box" instead.
>
>  if i recall, the original gittorrent work that you did (mirror-sync),
> the primary aim was to rely solely and exclusively on a one-to-one
> direct link between one machine and another.  in other words, whilst
> syncing, if that peer went "offline", you're screwed - you have to
> start again.  is that a fair assessment?  please do correct any
> assumptions that i've made.

Ok.  Well, first off - I didn't start gittorrent; that was Jonas
Fonseca, and it was his Masters thesis.  Criticism about not having a
completed implementation to work with is therefore shared between him
and those who have come along since, such as myself.

I don't know why you got the idea that the protocol is one to one.  It's
one to one just like BitTorrent is - every communication is between two
nodes who share information about what they have and what they need,
before transferring data.  It is supposed to be restartable and it is
not supposed to matter which node data is exchanged with.  In that way,
you could in principle download from multiple nodes at once, or you
could have restartable transfers.  If you lose connectivity then the
most that should have to be re-transferred are incomplete blocks.
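That block-level restartability can be sketched in a few lines.  The
names below are hypothetical (real BitTorrent tracks this with a
bitfield per piece), but the idea is the same: track which blocks are
complete, and after a drop, re-request only the missing ones - from any
peer.

```python
# Sketch of block-level resume: only blocks not yet completed are
# re-requested after a dropped connection.  Names are hypothetical;
# a real implementation would use a per-piece bitfield and hashes.

BLOCK = 4  # bytes per block, kept tiny for illustration

def missing_blocks(total_blocks, have):
    """Blocks still needed, regardless of which peer supplies them."""
    return [i for i in range(total_blocks) if i not in have]

def fetch(data, have, upto=None):
    """Simulate fetching blocks from some peer; the connection may
    drop after `upto` blocks, leaving `have` partially filled."""
    total = (len(data) + BLOCK - 1) // BLOCK
    for i in missing_blocks(total, set(have))[:upto]:
        have[i] = data[i * BLOCK:(i + 1) * BLOCK]
    return have

payload = b"0123456789abcdef"          # 4 blocks of 4 bytes
have = fetch(payload, {}, upto=2)      # first peer drops mid-transfer
have = fetch(payload, have)            # resume from any other peer
assembled = b"".join(have[i] for i in sorted(have))
```

Nothing already completed is transferred twice, and the resuming peer
need not be the one the transfer started with.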

>  because on the basis _of_ that assumption, i decided not to proceed
> with mirror-sync, instead to pursue a "cache git pack-objects"
> approach and to use bittorrent "black-box-style".  which i
> demonstrated (minus the cacheing) works perfectly well, several months
> back.

Right, but as others have noted, there are significant drawbacks with
this approach.  For a start, to participate in such a network, you need
to get the particular exact pack that is currently being torrented; just
having a clone is not enough.  This is because the result of git
pack-objects is not repeatable.

That being said, for many projects that would be an acceptable
compromise for the advantages of a restartable clone.  That is why I
suggest that a torrent transfer, treated as an infrequently updated
mirror, may be a better approach than trying to over-automate
everything.

>  as well, after nicolas and others went to all the trouble to explain
> what git pack-objects is, how it works, and how damn efficient it is,
> i'm pretty much convinced that an approach to uniquely identify, then
> pick and cache the *best* git pack-object made [by all the peers
> requested to provide a particular commit range], is the best, most
> efficient - and importantly simplest and easiest to understand -
> approach so far that i've heard.  perhaps that's because i came up
> with it, i dunno :)  but the important thing is that i can _show_ that
> it works (http://gitorious.org/python-libbittorrent/pybtlib - go back
> a few revisions)

That's great.  If you want to continue this simple approach and ignore
the gittorrent/mirror-sync path altogether, that's fine too.

Trying to determine the "best" pack-object may be counter-productive. 
Here's a simple approach which allows the repository owner to easily
arrange for efficient torrenting of essential object files:

Add to the .torrent manifest just these files:

  .git/objects/pack/pack-*.pack - just the packs with .keep files
  .git/packed-refs - just the references that are completely available
    via the .keep packs

In that way, a repository owner can periodically re-pack their repo,
mark the new pack files with .keep, then re-generate the .torrent file.
All nodes will then only have to transfer the new packs, not everything.
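Under that scheme, selecting the manifest entries is just a directory
scan.  A sketch (the .torrent encoding itself - bencoding, piece hashes
- is omitted, and the repository layout is simulated in a temp dir):

```python
# Sketch: collect the pack files the owner has marked with .keep,
# which is what the proposed .torrent manifest would list.  The
# .torrent encoding itself is left out; only the selection is shown.
import os
import tempfile

def keep_packs(pack_dir):
    """Return pack files that have a matching .keep marker."""
    packs = []
    for name in sorted(os.listdir(pack_dir)):
        if name.endswith(".pack"):
            keep = name[:-len(".pack")] + ".keep"
            if os.path.exists(os.path.join(pack_dir, keep)):
                packs.append(name)
    return packs

# Simulate .git/objects/pack with one kept pack and one unkept pack.
tmp = tempfile.mkdtemp()
for f in ("pack-aaaa.pack", "pack-aaaa.keep", "pack-bbbb.pack"):
    open(os.path.join(tmp, f), "w").close()

manifest = keep_packs(tmp)   # only the kept pack qualifies
```

Because .keep packs are never rewritten by git gc, their contents stay
byte-stable between repacks, which is exactly what a torrent swarm
needs.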

>  so - perhaps it would help if mirrorsync was revived, so that it can
> be used to demonstrate what you mean (there aren't any instructions on
> how to set up mirrorsync, for example).  that would then allow people
> to do a comparative analysis of the approaches being taken.

Ok, that sounds like a good plan - I'll see if I can devote some time
over the coming months to an explanatory series with working examples,
referring back to the existing code.

Cheers,
Sam
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

