Re: Figured out how to get Mozilla into git

On Sat, 10 Jun 2006, Rogan Dawes wrote:
>
> Here's an idea. How about separating trees and commits from the actual blobs
> (e.g. in separate packs)? My reasoning is that the commits and trees should
> only be a small portion of the overall repository size, and should not be that
> expensive to transfer. (Of course, this is only a guess, and needs some
> numbers to back it up.)

The trees in particular are actually a pretty big part of the history. 

More importantly, the blobs compress horribly badly in the absence of 
history - a _lot_ of the compression in git packing comes from the fact 
that we do a good job at delta-compression.
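
Something like this shows how much of a pack ends up delta-compressed - 
totally untested, and the path is obviously just an example:

	# repack everything, then look at the delta-chain summary at the end
	git repack -a -d
	git verify-pack -v .git/objects/pack/pack-*.idx | tail -20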

So if you get all of the commit/tree history, but none of the blob 
history, you're actually not going to win that much space. As already 
discussed, the _whole_ history packed with git is usually not insanely 
bigger than just the whole unpacked tree (with no history at all).
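
If you want to put a number on that for some project, a comparison like 
this should do it (untested, and it assumes a GNU du):

	# whole packed history ...
	git repack -a -d
	du -sh .git/objects/pack
	# ... vs just the checked-out files, with no history at all
	du -sh --exclude=.git .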

So you'd think that getting just the top version of the tree would be a 
much bigger space saving than it actually is. If you _also_ get all the 
tree and commit objects, the space saving is even smaller.

I actually suspect that the most realistic way to handle this is to use 
the "fetch.c" logic (ie the incremental fetcher used by http), and add 
some mode to the git daemon where you fetch literally one object at a time 
(ie this would be totally _separate_ from the pack-file thing: you'd not 
ask for "git-upload-pack", you'd ask for something like 
"git-serve-objects" instead).

The fetch.c logic really does allow for on-demand object fetching, and is 
thus much more suitable for incomplete repositories.
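
That's what the dumb-http transport already does today, btw - the fetch.c 
commit walker just grabs each object as it discovers it needs it. Roughly 
(the sha1 and URL are obviously just placeholders):

	# walk from a commit and fetch everything it needs, one loose
	# object at a time, over plain http - no pack-files involved
	git http-fetch -a -v <commit-sha1> http://example.com/project.git/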

HOWEVER. The fetch.c logic - by necessity - works on an object-by-object 
level. That means that you'd get no delta compression AT ALL, and I 
suspect that the downside of that would be a factor of ten expansion or 
more, which means that it would really not work that well in practice.
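
If somebody wants to measure it: unpack an existing pack into loose 
objects and compare the sizes. The loose objects are still individually 
zlib-compressed, they just have no deltas. Untested, and the pack name 
and paths are just examples:

	mkdir /tmp/loose-test && cd /tmp/loose-test && git init-db
	git unpack-objects < /path/to/repo/.git/objects/pack/pack-XXXX.pack
	du -sh .git/objects                     # loose, no deltas at all
	du -sh /path/to/repo/.git/objects/pack  # packed, with deltas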

It might be worth testing, though. It would work fine for the "after I 
have the initial cauterized tree, fetch small incremental updates" case. 
The operative words here being "small" and "incremental", because I'm 
pretty sure it really would suck for the case of a big fetch.

But it would be _simple_, which is why it's worth trying out. It also has 
the advantage that it would solve the "I had data corruption on my disk, 
and lost 100 objects, but all the rest is fine" issue. Again, that's 
not something that the efficient packing protocol handles, exactly because 
it assumes full history, and uses that to do all its optimizations.
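
You can already limp along that way with the dumb-http fetcher, btw: fsck 
to see what's missing, then re-walk your refs with --recover against a 
known-good copy. Totally untested, and the URL is obviously just an example:

	# see which objects got lost
	git fsck-objects --full
	# re-fetch everything reachable from HEAD that we don't have,
	# one loose object at a time, from a good mirror
	git http-fetch -a -v --recover $(git rev-parse HEAD) \
		http://example.com/project.git/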

		Linus