On Wed, Jan 24 2018, Jeff King jotted:

> On Wed, Jan 24, 2018 at 11:03:47PM +0100, Ævar Arnfjörð Bjarmason wrote:
>
>> This produces a total of 0 blocks that are the same. If after the
>> repack we throw this in there:
>>
>>     echo 5be1f00a9a | git pack-objects --no-reuse-delta --no-reuse-object --revs .git/objects/pack/manual
>>
>> Just over 8% of the blocks are the same, and of course this pack
>> entirely duplicates the existing packs, and I don't know how to coerce
>> repack/pack-objects into keeping this manual-* pack and re-packing the
>> rest, removing any objects that exist in the manual-* pack.
>
> I think touching manual-*.keep would do what you want (followed by
> "repack -ad" to drop the duplicate objects).

Thanks, that got the number of identical blocks just north of 15%...

> You may also want to use "--threads=1" to avoid non-determinism in the
> generated packs. In theory, both repos would then produce identical base
> packs, though it does not seem to do so in practice (I didn't dig into
> what the difference may be).

...and north of 20% with --threads=1.

>> I couldn't find any references to someone trying to get this
>> particular use-case working on-list, i.e. to pack different
>> repositories with a shared history in such a way as to optimize for
>> getting the most identical blocks within packs.
>
> I don't recall any discussion on this topic before.
>
> I think you're fighting against two things here:
>
>   - the order in which we find deltas; obviously a delta of A against B
>     is quite different than B against A
>
>   - the order of objects written to disk
>
> Those mostly work backwards through the history graph, so adding new
> history on top of old will cause changes at the beginning of the file,
> and "shift" the rest so that the blocks don't match.
>
> If you reverse the order of those, then the shared history is more
> likely to provide a common start to the pack.
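To spell out the .keep dance for anyone following along at home, this is roughly the sequence (the toy repo, commit contents, and the "manual" pack name are just for illustration):

```shell
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
for i in 1 2 3; do
	echo "content $i" >"$repo/file"
	git -C "$repo" add file
	git -C "$repo" -c user.name=t -c user.email=t@example.com \
		commit -qm "commit $i"
done

# Pack everything reachable from the first (root) commit into a
# standalone "manual" pack; pack-objects prints the pack's hash.
base=$(git -C "$repo" rev-list --max-parents=0 HEAD)
pack=$(echo "$base" |
	git -C "$repo" pack-objects -q --revs \
		"$repo/.git/objects/pack/manual")

# A .keep file next to the pack stops repack from deleting or
# rewriting it; "repack -ad" then packs everything else and drops
# the objects that are duplicated in the kept pack.
touch "$repo/.git/objects/pack/manual-$pack.keep"
git -C "$repo" repack -adq

ls "$repo/.git/objects/pack/"
```

After this the manual-* pack survives untouched and the remaining objects land in a second pack, which is the layout the shared-block comparison above was run against.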
> See compute_write_order()
> and the final line of type_size_sort().

I'll have to poke at what compute_write_order() is doing, but FWIW this
change to type_size_sort() got shared blocks down to 3%:

diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index 81ad914cfc..c9ada1bd1c 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -1764,7 +1764,7 @@ static int type_size_sort(const void *_a, const void *_b)
 		return -1;
 	if (a->size < b->size)
 		return 1;
-	return a < b ? -1 : (a > b); /* newest first */
+	return b < a ? -1 : (b > a); /* newest first */
 }
 
 struct unpacked {

>> It should be possible to produce such a pack, e.g. by having a repack
>> mode that would say:
>>
>>  1. Find what the main branch is
>>  2. Get its commits in reverse order, produce packs of some chunk-size
>>     of commit batches.
>>  3. Pack all the remaining content
>>
>> This would delta much less efficiently, but as noted above the
>> block-level deduplication might make up for it, and in any case some
>> might want to use less disk space.
>
> We do something a bit like this at GitHub. There we have a single pack
> holding all of the objects for many forks. So the deduplication is done
> already, but we want to avoid deltas that cross fork boundaries (since
> they mean throwing away the delta and recomputing from scratch when
> somebody fetches). And then we write the result in layers, although
> right now there are only 2 layers (some "base" fork gets all of its
> objects, and then everybody else's objects are dumped on top).
>
> I suspect some of the same concepts could be applied. If you're
> interested in playing with it, I happened to extract it into a single
> patch recently (it's on my list of "stuff to send upstream" but I
> haven't gotten around to polishing it fully). It's the
> "jk/delta-islands" branch of https://github.com/peff/git (which I
> happen to know you already have a clone of ;)).

Thanks.
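For concreteness, the batched scheme I'm describing could be sketched from the outside with rev-list and pack-objects like this (the toy repo and the chunk size of 4 are arbitrary; a real version would presumably live inside repack):

```shell
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
for i in 1 2 3 4 5 6 7 8 9 10; do
	echo "v$i" >"$repo/file"
	git -C "$repo" add file
	git -C "$repo" -c user.name=t -c user.email=t@example.com \
		commit -qm "c$i"
done

chunk=4  # commits per pack; arbitrary for the sketch
prev=
n=0
# Walk history oldest-first and pack each batch of commits on top of
# the previous batch, so old packs stay stable as history grows.
for tip in $(git -C "$repo" rev-list --reverse HEAD); do
	n=$((n + 1))
	[ $((n % chunk)) -eq 0 ] || continue
	{
		echo "$tip"
		if [ -n "$prev" ]; then echo "^$prev"; fi
	} | git -C "$repo" pack-objects -q --revs \
		"$repo/.git/objects/pack/chunk" >/dev/null
	prev=$tip
done

# Step 3: pack all the remaining content (commits 9 and 10 here).
{
	echo HEAD
	if [ -n "$prev" ]; then echo "^$prev"; fi
} | git -C "$repo" pack-objects -q --revs \
	"$repo/.git/objects/pack/chunk" >/dev/null

ls "$repo/.git/objects/pack/"
```

With 10 commits and a chunk size of 4 this yields three packs; two repos sharing the first 8 commits would then produce byte-identical first and second packs (modulo the ordering/threading caveats discussed above).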
I'll look into that, although the above results (sans hacking on the
core pack-objects logic) suggest that even once I create an island I'm
getting at most ~20% identical blocks.
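For reference, here is one way to count identical fixed-size blocks between two pack files, which is roughly what the percentages above refer to (the 4 KiB block size, md5, and GNU split's --filter are arbitrary choices for the sketch; the demo files stand in for two packs):

```shell
set -e
dir=$(mktemp -d)
# Two demo files standing in for packs: the first 4 KiB block is
# identical, the second block differs.
{
	head -c 4096 /dev/zero | tr '\0' A
	head -c 4096 /dev/zero | tr '\0' B
} >"$dir/a.pack"
{
	head -c 4096 /dev/zero | tr '\0' A
	head -c 4096 /dev/zero | tr '\0' C
} >"$dir/b.pack"

# Hash every 4 KiB block of a file, one hash per line, deduplicated.
blocks() {
	split -b 4096 --filter='md5sum' "$1" | sort -u
}

blocks "$dir/a.pack" >"$dir/a.blocks"
blocks "$dir/b.pack" >"$dir/b.blocks"
shared=$(comm -12 "$dir/a.blocks" "$dir/b.blocks" | wc -l)
echo "shared blocks: $shared"
```

A block-level deduplicating filesystem or backup tool would see exactly those shared blocks, which is why pack ordering matters so much here.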