Re: Compression and dictionaries

On 8/13/06, Shawn Pearce <spearce@xxxxxxxxxxx> wrote:
Jon Smirl <jonsmirl@xxxxxxxxx> wrote:
> From what I remember from long ago most compression schemes build
> dictionaries as a way of achieving significant compression. If so,
> since we zlib compress each entry in a pack individually, are there
> many copies of very similar dictionaries in the pack?

Yes, possibly.  Every object in the pack has its own dictionary.

But I'm not sure there would be any savings from sharing
dictionaries.  One problem is that you probably don't want a single
massive dictionary for the entire pack, as it could be very large,
and updating it with additions would likely require recompressing
every entry.  Typically, once an entry in the pack has been
compressed, GIT won't recompress it.

The zlib doc says to put your most common strings into the fixed
dictionary.  If a string isn't in the fixed dictionary, it will get
handled with an internal dictionary entry.  By default zlib runs with
an empty fixed dictionary and handles everything with the internal
dictionary.
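
In zlib terms the "fixed dictionary" is the preset dictionary: you
load it with deflateSetDictionary() before compressing, and supply it
again with inflateSetDictionary() when inflate() reports Z_NEED_DICT.
A minimal sketch (the dictionary bytes here are purely illustrative,
not a proposed git dictionary):

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
	static const Bytef dict[] =
		"static const unsigned int void while continue";
	const char *src = "static void f(void) { while (1) continue; }";
	Bytef out[256], back[256];
	z_stream c, d;

	memset(&c, 0, sizeof(c));
	memset(&d, 0, sizeof(d));

	/* compress with the shared dictionary loaded up front */
	deflateInit(&c, Z_BEST_COMPRESSION);
	deflateSetDictionary(&c, dict, sizeof(dict) - 1);
	c.next_in = (Bytef *)src;
	c.avail_in = strlen(src);
	c.next_out = out;
	c.avail_out = sizeof(out);
	deflate(&c, Z_FINISH);
	printf("compressed to %lu bytes\n", c.total_out);

	/* decompress: inflate() stops with Z_NEED_DICT until we
	   hand it the same dictionary */
	inflateInit(&d);
	d.next_in = out;
	d.avail_in = c.total_out;
	d.next_out = back;
	d.avail_out = sizeof(back);
	if (inflate(&d, Z_FINISH) == Z_NEED_DICT) {
		inflateSetDictionary(&d, dict, sizeof(dict) - 1);
		inflate(&d, Z_FINISH);
	}
	printf("round-tripped %lu bytes\n", d.total_out);

	deflateEnd(&c);
	inflateEnd(&d);
	return 0;
}

For a pack this would mean compiling one well-chosen dictionary into
git and passing it to every per-object deflate/inflate call.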

Since we are encoding C, many strings will always be present (if,
static, define, const, char, include, int, void, while, continue,
etc.).  Do you have any tools to identify the top 500 strings in C
code?  The fixed dictionary would get hardcoded into the git apps.
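
For what it's worth, one way such a tool might look (a hypothetical
sketch, nothing git-specific): count identifier tokens on stdin and
print the most frequent ones.  The table size, token-length cap, and
top-500 cutoff are arbitrary choices, and it assumes well under 64K
unique tokens:

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TABSZ 65536		/* open-addressing table, power of two */
#define MAXTOK 64

struct ent { char tok[MAXTOK]; unsigned long n; };
static struct ent tab[TABSZ];

static unsigned hash(const char *s)
{
	unsigned h = 5381;
	while (*s)
		h = h * 33 + (unsigned char)*s++;
	return h & (TABSZ - 1);
}

static void count(const char *tok)
{
	unsigned i = hash(tok);
	/* linear probing; no overflow handling in this sketch */
	while (tab[i].n && strcmp(tab[i].tok, tok))
		i = (i + 1) & (TABSZ - 1);
	strcpy(tab[i].tok, tok);	/* tok is capped below MAXTOK */
	tab[i].n++;
}

static int by_count(const void *a, const void *b)
{
	const struct ent *x = a, *y = b;
	return x->n < y->n ? 1 : x->n > y->n ? -1 : 0;
}

int main(void)
{
	char tok[MAXTOK];
	int c, len = 0, i;

	while ((c = getchar()) != EOF) {
		if (isalnum(c) || c == '_') {
			if (len < MAXTOK - 1)
				tok[len++] = c;
		} else if (len) {
			tok[len] = '\0';
			count(tok);
			len = 0;
		}
	}
	if (len) {
		tok[len] = '\0';
		count(tok);
	}
	qsort(tab, TABSZ, sizeof(tab[0]), by_count);
	for (i = 0; i < 500 && tab[i].n; i++)
		printf("%8lu %s\n", tab[i].n, tab[i].tok);
	return 0;
}

Run it as, say, "cat *.c | ./topstrings" over a source tree.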

A fixed dictionary could conceivably take 5-10% off the size of each entry.

However, whenever possible, deltas get used between objects.
This allows an object to copy content from another object, with
copy commands typically taking just a couple of bytes to copy a
whole range of bytes from the other object.  This works pretty
well when the current revision of a file is stored with just zlib
compression and older revisions copy their content from the current
revision using the delta format.
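
For the curious, a sketch of applying such a delta, modeled on git's
pack delta format: a copy command packs an (offset, size) pair into a
few bytes, an insert command carries literal bytes.  The bounds
checking a real decoder needs is omitted here:

#include <stdio.h>
#include <string.h>

/* base-128 varint, low bits first, MSB means "more bytes follow" */
static unsigned long get_varint(const unsigned char **p)
{
	unsigned long v = 0;
	int shift = 0;
	unsigned char c;
	do {
		c = *(*p)++;
		v |= (unsigned long)(c & 0x7f) << shift;
		shift += 7;
	} while (c & 0x80);
	return v;
}

/* expand 'delta' against 'src' into 'dst'; returns bytes written */
unsigned long apply_delta(const unsigned char *src,
			  const unsigned char *delta, unsigned long dlen,
			  unsigned char *dst)
{
	const unsigned char *end = delta + dlen;
	unsigned char *out = dst;

	get_varint(&delta);	/* source size (unchecked in this sketch) */
	get_varint(&delta);	/* target size */

	while (delta < end) {
		unsigned char op = *delta++;
		if (op & 0x80) {
			/* copy command: low bits say which offset/size
			   bytes follow */
			unsigned long off = 0, size = 0;
			if (op & 0x01) off  = *delta++;
			if (op & 0x02) off |= (unsigned long)*delta++ << 8;
			if (op & 0x04) off |= (unsigned long)*delta++ << 16;
			if (op & 0x08) off |= (unsigned long)*delta++ << 24;
			if (op & 0x10) size  = *delta++;
			if (op & 0x20) size |= (unsigned long)*delta++ << 8;
			if (op & 0x40) size |= (unsigned long)*delta++ << 16;
			if (!size)
				size = 0x10000;
			memcpy(out, src + off, size);
			out += size;
		} else if (op) {
			/* insert command: next 'op' bytes are literals */
			memcpy(out, delta, op);
			delta += op;
			out += op;
		}
		/* op == 0 is reserved; a real decoder errors out */
	}
	return out - dst;
}

int main(void)
{
	const unsigned char src[] = "the quick brown fox";
	/* hand-built delta: source size 19, target size 9,
	   then one copy of 9 bytes from offset 10 */
	const unsigned char delta[] = { 19, 9, 0x91, 10, 9 };
	unsigned char dst[32];
	unsigned long n = apply_delta(src, delta, sizeof(delta), dst);
	printf("%.*s\n", (int)n, dst);	/* prints "brown fox" */
	return 0;
}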

I should note that delta compression works on trees, commits and
tags too; however, it gets the most benefit out of trees when only
a fraction of the files in the tree are modified.  Commits and tags
are harder to delta as they tend to be mostly different.

My fast-import computes deltas in the order you are feeding
it objects, so each blob is deltafied against the prior object.
Since you are feeding them in reverse RCS order (newest to oldest),
you are probably getting reasonably good delta compression.

--
Shawn.



--
Jon Smirl
jonsmirl@xxxxxxxxx
