On Wed, Oct 27, 2010 at 09:52:49PM +0200, Pierre C wrote:
>> Even if somebody had a great idea that would make things smaller
>> without any other penalty, which I'm not sure I believe either.
>
> I'd say that the only things likely to bring an improvement significant
> enough to warrant the (quite large) hassle of implementation would be:
>
> - read-only / archive tables (get rid of row header overhead)
> - in-page compression, using per-column delta storage for instance (no
>   random access penalty, but hard to implement, maybe easier for
>   read-only tables)
> - dumb LZO-style compression (license problems, needs parallel
>   decompressor, random access penalty, hard to implement too)

Different algorithms have been discussed here before. A quick search
turned up:

quicklz - GPL or commercial
fastlz  - MIT, works with BSD okay
zippy   - Google - no idea about the licensing
lzf     - BSD-type
lzo     - GPL or commercial
zlib    - current algorithm

Of these, lzf can compress at almost 3.7X the speed of zlib and
decompress at 1.7X, and fastlz can compress at 3.1X the speed of zlib
and decompress at 1.9X. The same comparison puts lzo at 3.0X for
compression and 1.8X for decompression. The block design of lzf/fastlz
may be useful to support substring access to toasted data, among other
ideas that have been floated here in the past; rough sketches of the
delta-storage and block-wise ideas follow below.

Just keeping the hope alive for faster compression.

Cheers,
Ken
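For concreteness, here is a minimal sketch of the per-column delta idea
quoted above: one int32 column from a page stored as running
differences. Everything here (the names, the layout, the int32 type) is
invented for illustration, not a proposal for an on-disk format:

#include <stdint.h>
#include <stddef.h>

/* Encode: out[i] = vals[i] - vals[i-1], with vals[-1] taken as 0.
 * Deltas of sorted or slowly-changing columns are mostly small
 * values, which a byte-oriented compressor then shrinks well. */
static void delta_encode(const int32_t *vals, int32_t *out, size_t n)
{
    int32_t prev = 0;
    for (size_t i = 0; i < n; i++)
    {
        out[i] = vals[i] - prev;
        prev = vals[i];
    }
}

/* Decode is a running sum, so rebuilding any prefix is one cheap
 * sequential pass over at most a page worth of deltas; that is why
 * the quoted text expects no real random access penalty. */
static void delta_decode(const int32_t *deltas, int32_t *vals, size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++)
    {
        acc += deltas[i];
        vals[i] = acc;
    }
}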
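And a rough sketch of what block-wise compression with substring access
could look like, assuming liblzf's lzf_compress()/lzf_decompress()
interface (lzf_compress() returns 0 when the output would not fit in
out_len). The 4 kB block size, the per-block length index, and all the
struct and function names are made up for illustration; this is not how
TOAST stores or slices data today:

#include <stdlib.h>
#include <string.h>
#include <lzf.h>

#define BLOCK 4096              /* illustrative block size */

typedef struct
{
    size_t   total;             /* original datum length */
    unsigned nblocks;
    unsigned *clen;             /* stored length per block */
    unsigned char **cdata;      /* per-block payload, compressed or raw */
} BlockedDatum;

/* Compress block by block, remembering each block's stored length.
 * A block whose stored length equals its original length is raw. */
static BlockedDatum *blocked_compress(const unsigned char *data, size_t len)
{
    BlockedDatum *bd = malloc(sizeof *bd);

    bd->total = len;
    bd->nblocks = (len + BLOCK - 1) / BLOCK;
    bd->clen = malloc(bd->nblocks * sizeof *bd->clen);
    bd->cdata = malloc(bd->nblocks * sizeof *bd->cdata);

    for (unsigned i = 0; i < bd->nblocks; i++)
    {
        size_t off = (size_t) i * BLOCK;
        unsigned in_len = (len - off < BLOCK) ? (unsigned) (len - off) : BLOCK;
        unsigned char buf[BLOCK];
        unsigned n = lzf_compress(data + off, in_len, buf, in_len - 1);

        if (n == 0)             /* did not shrink: keep the block raw */
        {
            bd->clen[i] = in_len;
            bd->cdata[i] = malloc(in_len);
            memcpy(bd->cdata[i], data + off, in_len);
        }
        else
        {
            bd->clen[i] = n;
            bd->cdata[i] = malloc(n);
            memcpy(bd->cdata[i], buf, n);
        }
    }
    return bd;
}

/* Substring access: decompress only the blocks covering [off, off+len). */
static void blocked_substr(const BlockedDatum *bd, size_t off, size_t len,
                           unsigned char *out)
{
    while (len > 0)
    {
        unsigned i = (unsigned) (off / BLOCK);
        size_t boff = off % BLOCK;
        size_t bsz = bd->total - (size_t) i * BLOCK;
        unsigned char buf[BLOCK];
        size_t n;

        if (bsz > BLOCK)
            bsz = BLOCK;
        if (bd->clen[i] == bsz) /* stored raw */
            memcpy(buf, bd->cdata[i], bsz);
        else
            lzf_decompress(bd->cdata[i], bd->clen[i], buf, BLOCK);

        n = (bsz - boff < len) ? bsz - boff : len;
        memcpy(out, buf + boff, n);
        out += n;
        off += n;
        len -= n;
    }
}

Fetching a small slice of a large datum then decompresses only the
blocks that overlap the requested range, which is the property a block
design buys over whole-datum zlib.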