Re: Continue git clone after interruption

On Sat, 2009-08-22 at 04:13 -0400, Nicolas Pitre wrote:
> > Ok, but right now there's no way to specify that you want a thin pack,
> > where the allowable base objects are *newer* than the commit range you
> > wish to include.
> 
> Sure you can.  Try this:
> 
> 	( echo "-$(git rev-parse v1.6.4)"; \
> 	  git rev-list --objects v1.6.2..v1.6.3 ) | \
> 		git pack-objects --progress --stdout > foo.pack
> 
> That'll give you a thin pack for the _new_ objects that _appeared_ 
> between v1.6.2 and v1.6.3, but which external delta base objects are 
> found in v1.6.4.

Aha.  I guess I had made an assumption about where that '-' lets
pack-objects find delta bases that wasn't true.

> > What I said in my other e-mail, where I showed how well it works taking
> > a given bundle and slicing it into a series of thin packs, was that it
> > seems to add a bit of extra size to the resultant packs - the best I got
> > slicing up the entire git.git repository was about 20% overhead.  If this
> > can be reduced to under 10% (say), then sending bundle slices would be
> > quite reasonable by default, for the benefit of making large fetches
> > restartable, or even spreadable across multiple mirrors.
> 
> In theory you could have almost no overhead.  That all depends on how you 
> slice the pack.  If you want a pack to contain a fixed number of commits 
> (such that all objects introduced by a given commit are all in the same 
> pack) then you are of course putting a constraint on the possible delta 
> matches, and the compression result might be suboptimal.  In comparison, 
> with a single big pack a given blob can delta against a blob from a 
> completely distant commit in the history graph if that provides a better 
> compression ratio.
 [...]
> If you were envisioning _clients_ à la BitTorrent putting up pack slices 
> instead, then the slices have to be well-defined entities, like packs 
> containing objects for a known range of commits, but then we're back to 
> the delta inefficiency I mentioned above.

I'll do some more experiments to try to quantify this in light of the
new information; I still think that if the overhead is marginal there
are significant wins in this approach.

> And again this might 
> work only if a lot of people are interested in the same repository at 
> the same time, and of course most people have little incentive to "seed" 
> once they have their copy.  So I'm not sure whether that would work that 
> well in practice.

Throw away terms like "seeding" and replace them with "mirroring".  Sites
which currently host mirrors could potentially help serve git
repositories too.  Popular projects could have many mirrors, and at the
edges of the internet git servers could mirror many projects for users in
their country.

Sam

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
