Subject: Re: [PATCH] git-merge-pack

On Thu, 6 Sep 2007, Junio C Hamano wrote:

> This is a beginning of "git-merge-pack" that combines smaller
> packs into one.  Currently it does not actually create a new
> pack, but pretends that it is a (dumb) "git-rev-list --objects"
> that lists the objects in the affected packs.  You have to pipe
> its output to "git-pack-objects".
> 
> The command reads the names of pack-*.pack files from the standard
> input and outputs the objects' names in the order they are stored
> in the original packs (i.e. the offset order).  This sorting is
> done to emulate the traversal order in which the original
> "git-rev-list --objects" run that created the existing packs
> listed the objects.
> 
> While this approach would give the resulting packfile locality of
> access very similar to the original's, it does not provide the
> "name" component you would see in "git-rev-list --objects"
> output.  That information is used as the clustering cue while
> computing deltas, and lacking it means you can get horrible
> delta selection.  You do _not_ want to run the downstream
> "git-pack-objects" without the optimization/heuristics to reuse
> existing deltas; IOW, do not run it with --no-reuse-delta.

I wonder if this is the best way to go.  In the context of a really fast 
repack happening automatically after (or during) interactive user 
operations, the above seems a bit heavyweight and slow to me.

I would have concatenated all the packs provided on the command line 
into a single one, simply by reading the object data from the existing 
packs and writing it back without any processing at all.  The offsets 
used by OBJ_OFS_DELTA are relative, so a simple concatenation will just 
work.
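
Roughly like this, as an untested sketch (it assumes version 2 packs 
and borrows OpenSSL's SHA1 for the checksum, both my own choices here; 
error handling omitted).  The only bytes that need touching are the 
envelope: strip each input's 12-byte header and 20-byte trailing 
checksum, write one combined header up front, and recompute the 
checksum over the result:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>
    #include <arpa/inet.h>      /* ntohl(), htonl() */
    #include <openssl/sha.h>    /* SHA1_Init() & friends (assumed) */

    static void emit(FILE *out, SHA_CTX *ctx, const void *buf, size_t len)
    {
        fwrite(buf, 1, len, out);
        SHA1_Update(ctx, buf, len);
    }

    int main(int argc, char **argv)
    {
        unsigned char hdr[12], sha1[20];
        uint32_t v, total = 0;
        SHA_CTX ctx;
        int i;

        /* First pass: sum the object counts from each 12-byte header. */
        for (i = 1; i < argc; i++) {
            FILE *in = fopen(argv[i], "rb");
            fread(hdr, 1, 12, in);
            memcpy(&v, hdr + 8, 4);
            total += ntohl(v);
            fclose(in);
        }

        /* Combined header: "PACK", version 2, summed object count. */
        SHA1_Init(&ctx);
        memcpy(hdr, "PACK", 4);
        v = htonl(2);
        memcpy(hdr + 4, &v, 4);
        v = htonl(total);
        memcpy(hdr + 8, &v, 4);
        emit(stdout, &ctx, hdr, 12);

        /* Second pass: copy everything between each pack's header and
         * its trailing checksum.  OBJ_OFS_DELTA offsets are relative,
         * so the object data itself needs no rewriting. */
        for (i = 1; i < argc; i++) {
            FILE *in = fopen(argv[i], "rb");
            char buf[8192];
            long left;

            fseek(in, 0, SEEK_END);
            left = ftell(in) - 12 - 20;
            fseek(in, 12, SEEK_SET);
            while (left > 0) {
                size_t n = fread(buf, 1, left < (long)sizeof(buf)
                                 ? (size_t)left : sizeof(buf), in);
                emit(stdout, &ctx, buf, n);
                left -= n;
            }
            fclose(in);
        }

        SHA1_Final(sha1, &ctx);
        fwrite(sha1, 1, 20, stdout);    /* fresh trailing checksum */
        return 0;
    }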

Then the index for that pack can be created just as easily by reading 
the existing pack index files, storing the data into an array of struct 
pack_idx_entry, adding the appropriate offset to each object's offset, 
and then calling write_idx_file().
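
A rough sketch of that part (read_idx_entries() is a hypothetical 
helper for parsing a single .idx file; the struct mirrors git's struct 
pack_idx_entry from pack.h):

    #include <stdlib.h>
    #include <stdint.h>
    #include <sys/types.h>

    /* Mirrors git's struct pack_idx_entry (pack.h). */
    struct pack_idx_entry {
        unsigned char sha1[20];
        uint32_t crc32;
        off_t offset;
    };

    /* Hypothetical helper: parse one .idx file into a malloc'ed
     * array, returning the number of entries. */
    int read_idx_entries(const char *idx_path, struct pack_idx_entry **out);

    /*
     * base_off[i] is where pack i's object data starts in the
     * concatenated pack: 12 for the first one (right after the new
     * header), then each subsequent pack picks up where the previous
     * one's data ended.
     */
    struct pack_idx_entry **merge_indexes(char **idx_paths, off_t *base_off,
                                          int npacks, int *nr_total)
    {
        struct pack_idx_entry **objects = NULL;
        int nr = 0, alloc = 0, i, j;

        for (i = 0; i < npacks; i++) {
            struct pack_idx_entry *e;
            int n = read_idx_entries(idx_paths[i], &e);

            for (j = 0; j < n; j++) {
                /* Old offsets counted the old 12-byte header;
                 * shift each one to where this pack's data
                 * landed in the new file. */
                e[j].offset += base_off[i] - 12;
                if (nr == alloc) {
                    alloc = alloc ? 2 * alloc : 64;
                    objects = realloc(objects, alloc * sizeof(*objects));
                }
                objects[nr++] = &e[j];
            }
        }
        *nr_total = nr;
        return objects;
    }

The resulting array is then handed to write_idx_file() which, if I 
remember the signature right, takes the entry pointers, their count, 
and the new pack's SHA-1.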

All data is read once and written once, making this no more costly than 
a simple file copy.  On the flip side, it wouldn't get rid of 
duplicated objects (I don't know if that matters, i.e. whether 
something might break with the same object stored twice in a pack).

> To consolidate all packs that are smaller than a megabyte into
> one, you would use it in its current form like this:
> 
>     $ old=$(find .git/objects/pack -type f -name '*.pack' -size -1M)
>     $ new=$(echo "$old" | git merge-pack | git pack-objects pack)
>     $ for p in $old; do rm -f $p ${p%.pack}.idx; done
>     $ for s in pack idx; do mv pack-$new.$s .git/objects/pack/; done

You might want to move the new pack into place before removing the old 
ones, though.

> Obvious next steps that can be done in parallel by interested
> parties would be:
> 
>  (1) come up with a way to give "name" aka "clustering cue" (I
>      think this is very hard);

It is, and IMHO not worth it.  If you do it separately from the usual 
pack-objects process, you'll perform extra I/O and decompression while 
walking tree objects just to reconstruct those paths, which is really 
slow by the definition of the context I gave above.

If you really want to do it, then the best way might simply be to 
reverse your find result above, so that pack-objects treats the larger 
packs, i.e. the ones that you don't want to merge, as if they had an 
associated .keep file.

In fact, since we want to _also_ repack loose objects in the context of 
automatic repacking, I wonder why we wouldn't use that --unpacked= 
argument to repack smallish packs at the same time, in a single 
pack-objects pass.  Or maybe I'm missing something?


Nicolas