On May 3, 2011, at 5:39 PM, Greg Smith wrote:

> I've also seen over a 20:1 speedup over PostgreSQL by using Greenplum's
> free Community Edition server, in situations where its column store +
> compression features work well on the data set. That's easiest with an
> append-only workload, and the data set needs to fit within the
> constraints where indexes on compressed data are useful. But if you fit
> the use profile it's good at, you end up with considerable ability to
> trade off using more CPU resources to speed up queries. It effectively
> increases the amount of data that can be cached in RAM by a large
> multiple, and in the EC2 context (where any access to disk is very
> slow) it can be quite valuable.

FWIW, EnterpriseDB's "InfiniCache" provides the same caching benefit. The way it works is that when PG goes to evict a page from shared buffers, that page gets compressed and stuffed into a memcache cluster. When PG determines that a given page isn't in shared buffers, it checks that memcache cluster before reading the page from disk. This allows you to cache amounts of data that far exceed the amount of memory you could put in a physical server.

--
Jim C. Nasby, Database Architect   jim@xxxxxxxxx
512.569.9461 (cell)                http://jim.nasby.net

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
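P.S. For anyone curious, the InfiniCache-style eviction/lookup flow described above can be sketched in a few lines. This is only an illustrative toy, not EnterpriseDB's implementation: a plain dict stands in for the memcached cluster, zlib stands in for whatever compression is actually used, and all class/method names are made up for the example.

```python
import zlib

class SecondTierPageCache:
    """Toy sketch of a compressed second-tier page cache.

    A dict stands in for the remote memcached cluster; a real setup
    would use a memcached client and handle network failures.
    """

    def __init__(self):
        self.remote = {}  # stand-in for the memcache cluster

    def on_evict(self, page_id, page_bytes):
        # When a page is evicted from shared buffers, compress it
        # and push it to the second tier instead of just dropping it.
        self.remote[page_id] = zlib.compress(page_bytes)

    def lookup(self, page_id):
        # On a shared-buffers miss, check the second tier before
        # falling back to a (slow) disk read.
        blob = self.remote.get(page_id)
        if blob is None:
            return None  # true miss: caller reads the page from disk
        return zlib.decompress(blob)

# Usage: evict a page, then satisfy a later read from the second tier.
cache = SecondTierPageCache()
page = b"\x00" * 8192  # an 8 KB PostgreSQL-sized page
cache.on_evict(42, page)
assert cache.lookup(42) == page   # hit: no disk read needed
assert cache.lookup(99) is None   # miss: fall through to disk
```

The win comes from the compression step: highly compressible pages take a fraction of their on-disk size in the cache tier, so the effective cached working set can be several times the raw memory you give it.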