---------- Forwarded message ----------
From: Mark Adler <madler@xxxxxxxxxxxxxxxxxx>
Date: Aug 15, 2006 10:43 AM
Subject: Re: Git SCM and zlib dictionaries
To: Jon Smirl <jonsmirl@xxxxxxxxx>
Cc: Jean-loup Gailly <jloup@xxxxxxxx>

On Aug 15, 2006, at 6:11 AM, Jon Smirl wrote:
> What we are doing is similar to full-text search indexing.
If the point of very small (1 KB-ish) compressed chunks is random access and individual decompression of those pieces, then there are other approaches. For example, you can compress many of them together for better compression (say 32), and accept some speed degradation by having to decompress, on average, half of them (16) to get to the one you want.

------------------------------

We have delta runs of about 20 revisions; compress those 20 blobs as a group instead of individually. The pack index would point all 20 sha1's to the same blob with a different type code. You had to load and unzip most of these objects anyway to compute the revision from the diffs. Putting them into a single zip means that they share a single compression table.

-------------------------------

Or you can process the whole thing to create a custom coding scheme, as illustrated in "Managing Gigabytes": http://www.cs.mu.oz.au/mg/

mark

--
Jon Smirl
jonsmirl@xxxxxxxxx
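A minimal sketch of the grouped-compression idea using Python's zlib. The function names and the stored-lengths index are my own illustration, not git's pack format; the point is that one deflate stream lets later chunks reuse the history window built by earlier ones, and that reading chunk k only requires decompressing the stream prefix up to k:

```python
import zlib

def pack_group(chunks):
    """Compress a list of small chunks as a single zlib stream.

    Returns (lengths, compressed). Grouping ~1 KB chunks this way
    beats compressing each one individually because they share one
    compression history instead of each starting cold.
    """
    lengths = [len(c) for c in chunks]
    compressed = zlib.compress(b"".join(chunks), 9)
    return lengths, compressed

def unpack_one(lengths, compressed, index):
    """Recover chunk `index`, decompressing only the prefix needed.

    On average this touches half the group, which is the speed
    trade-off described above.
    """
    need = sum(lengths[:index + 1])        # bytes up to end of target chunk
    d = zlib.decompressobj()
    data = d.decompress(compressed, need)  # stop once `need` bytes produced
    return data[need - lengths[index]:need]
```

With 20 delta blobs per group, as proposed, `pack_group` would be run once per delta run and the pack index would map each of the 20 sha1's to the same compressed blob plus its position in `lengths`.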