On Sat, 6 Sep 2008, Jon Smirl wrote:
>
> Some alternative algorithms are here...
> http://cs.fit.edu/~mmahoney/compression
> It is possible to beat zlib by 2x at the cost of CPU time and memory.

Jon, you're missing the point. The problem with zlib isn't that it
doesn't compress well. It's that it's too _SLOW_.

> Turning a 500MB packfile into a 250MB one has lots of advantages in
> IO reduction, so it is worth some CPU/memory to create it.

..and secondly, there's no way you'll find a compressor that comes even
close to being twice as good. 10% better, yes - but then generally much,
MUCH slower.

Take a look at that web page you quote, and then sort things by
decompression speed. THAT is the issue.

And no, LZO isn't even on that list. I haven't tested it, but looking
at the code, I do think LZO can be fast exactly because it seems to be
byte-based rather than bit-based, so I'd not be surprised if the claims
about its uncompression speed are true. The constant
bit-shifting/masking/extraction kills zlib performance (and please
realize that zlib is at the TOP of the list you pointed to once you
sort it by decompression speed - that silly site seems to not care
about compressor speed at all, _only_ about size).

So "kills" is a relative measure, but really - we're looking for
_faster_ algorithms, not slower ones!

			Linus
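
To make the byte-based vs bit-based point concrete, here's a minimal
sketch of the two inner-loop styles. This is not the actual zlib or LZO
code - the helpers (get_bits(), copy_match()) are made up purely to
show the shape of the work per decoded symbol:

	/* Illustrative sketch only - not zlib's or LZO's real loops. */
	#include <stdint.h>
	#include <stddef.h>
	#include <stdio.h>

	/*
	 * Bit-oriented step (zlib/DEFLATE style): every symbol needs
	 * accumulator refills, shifts and masks before it's even usable.
	 */
	static uint32_t get_bits(const uint8_t **in, uint32_t *acc,
				 int *nbits, int want)
	{
		while (*nbits < want) {		/* refill the accumulator */
			*acc |= (uint32_t)*(*in)++ << *nbits;
			*nbits += 8;
		}
		uint32_t val = *acc & ((1u << want) - 1); /* mask the symbol */
		*acc >>= want;			/* discard the used bits */
		*nbits -= want;
		return val;
	}

	/*
	 * Byte-oriented step (LZO style): lengths and offsets sit on byte
	 * boundaries, so the hot loop is plain loads and stores.
	 */
	static void copy_match(uint8_t *out, const uint8_t *match, size_t len)
	{
		while (len--)			/* forward copy handles	*/
			*out++ = *match++;	/* overlapping matches	*/
	}

	int main(void)
	{
		/* 0xB5 = 10110101b, read LSB-first as 3+3+2 bit fields */
		const uint8_t buf[] = { 0xB5 };
		const uint8_t *p = buf;
		uint32_t acc = 0;
		int nbits = 0;

		uint32_t a = get_bits(&p, &acc, &nbits, 3);	/* 101b = 5 */
		uint32_t b = get_bits(&p, &acc, &nbits, 3);	/* 110b = 6 */
		uint32_t c = get_bits(&p, &acc, &nbits, 2);	/*  10b = 2 */
		printf("%u %u %u\n", a, b, c);

		uint8_t out[4] = { 'a' };	/* rest zero-initialized */
		copy_match(out + 1, out, 3);	/* RLE-style overlap: "aaaa" */
		printf("%.4s\n", (const char *)out);
		return 0;
	}

The bit-based path burns several ALU operations per symbol before it
can do anything with the data; the byte-based path is basically memcpy
with small lengths. That difference is exactly what shows up in
decompression speed.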