Jon Smirl <jonsmirl@xxxxxxxxx> wrote:
> From what I remember from long ago most compression schemes build
> dictionaries as a way of achieving significant compression. If so,
> since we zlib compress each entry in a pack individually, are there
> many copies of very similar dictionaries in the pack?

Yes, possibly. Every object in the pack has its own dictionary. But I'm
not sure there would be any savings from sharing dictionaries. One
problem is that you probably don't want a single massive dictionary for
the entire pack: it could get very large, and updating it with new
entries would likely require recompressing every existing entry.
Typically, once an entry in the pack has been compressed, GIT won't
recompress it.

However, whenever possible deltas are used between objects. A delta
allows an object to copy content from another object, with copy
commands typically taking just a couple of bytes to copy a whole range
of bytes from the other object. This works pretty well when the current
revision of a file is stored with just zlib compression and older
revisions copy their content from the current revision using the delta
format.

I should note that delta compression works on trees, commits and tags
too; however, it gets the most benefit out of trees, where often only a
fraction of the files are modified between revisions. Commits and tags
are harder to delta as they tend to be mostly different from one
another.

My fast-import computes deltas in the order you are feeding it objects,
so each blob is deltified against the prior object. Since you are
feeding them in reverse RCS order (newest to oldest), you are probably
getting reasonably good delta compression.

-- 
Shawn.
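
P.S. To make the per-entry dictionary point above concrete, here is a
minimal sketch using Python's zlib module. It only illustrates the idea
of priming a compressor with a preset dictionary; it is not how the
pack code actually works, and the blob contents are invented.

    import zlib

    blob_v1 = (b"int main(void)\n{\n"
               b"    printf(\"hello, world\\n\");\n"
               b"    return 0;\n}\n") * 20
    blob_v2 = blob_v1.replace(b"hello, world", b"goodbye, world")

    # Per-entry compression: each object effectively carries its own
    # implicit dictionary, just like each entry in a pack.
    independent = len(zlib.compress(blob_v1)) + len(zlib.compress(blob_v2))

    # Hypothetical shared-dictionary variant: prime the compressor for
    # the second blob with the bytes of the first (zlib preset dictionary).
    co = zlib.compressobj(zdict=blob_v1)
    shared = len(zlib.compress(blob_v1)) + len(co.compress(blob_v2) + co.flush())

    print("independent:", independent, "preset dictionary:", shared)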
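
The copy-command idea behind deltas can be sketched the same way. The
real pack delta format is a compact binary encoding of copy and insert
opcodes, so the tuple representation below is purely illustrative:

    def apply_delta(base: bytes, ops) -> bytes:
        # Simplified copy/insert model of a delta.  The real encoding
        # packs offsets and sizes into variable-length binary opcodes.
        out = bytearray()
        for op in ops:
            if op[0] == "copy":
                _, offset, size = op          # reuse a byte range from the base
                out += base[offset:offset + size]
            else:
                out += op[1]                  # literal bytes unique to this object
        return bytes(out)

    # The current revision is stored whole; an older revision is rebuilt
    # mostly by copying ranges from it.
    current = b"unchanged first line\nsecond line, new text\n"
    older = apply_delta(current,
                        [("copy", 0, 21),                       # "unchanged first line\n"
                         ("insert", b"second line, old text\n")])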
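
And for the ordering point: a bare-bones blob stream fed newest
revision first might be generated like this. The blob/mark/data syntax
is real fast-import syntax, but the helper and the revision contents
are made up for the example:

    import sys

    def blob_cmd(mark: int, data: bytes) -> bytes:
        # 'blob' / 'mark :<n>' / 'data <byte count>' followed by the raw bytes.
        return b"blob\nmark :%d\ndata %d\n%s\n" % (mark, len(data), data)

    revisions_newest_first = [
        b"line one\nline two\nline three\n",   # newest: fed first
        b"line one\nline two\n",               # older: deltified against the prior blob
    ]
    stream = b"".join(blob_cmd(i + 1, rev)
                      for i, rev in enumerate(revisions_newest_first))
    sys.stdout.buffer.write(stream)            # pipe this into 'git fast-import'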