On Thu, May 5, 2011 at 1:02 AM, Josh Berkus <josh@xxxxxxxxxxxx> wrote:
>
>> FWIW, EnterpriseDB's "InfiniCache" provides the same caching benefit.
>> The way that works is: when PG goes to evict a page from shared buffers,
>> that page gets compressed and stuffed into a memcache cluster. When PG
>> determines that a given page isn't in shared buffers, it will then check
>> that memcache cluster before reading the page from disk. This allows you
>> to cache amounts of data that far exceed the amount of memory you could
>> put in a physical server.
>
> So memcached basically replaces the filesystem?

No, it sits between shared buffers and the filesystem, effectively
providing an additional layer of extremely large, compressed cache. Even
on a single server there can be benefits over larger shared buffers,
thanks to the compression.

> That sounds cool, but I'm wondering if it's actually a performance
> speedup.  Seems like it would only be a benefit for single-row lookups;
> any large reads would be a mess.

It depends on the database and the workload: if you can fit your entire
100GB database in the cache and your workload is read-intensive, then the
speedups are potentially huge (I've seen benchmarks showing 20x+).
Write-intensive workloads benefit less, as do workloads whose working set
is far larger than the cache.

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
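
To make the mechanism described above concrete, here is a minimal sketch of
that kind of second-level page cache in C, using libmemcached and zlib. It is
not EnterpriseDB's actual implementation: BLCKSZ, page_key(), fetch_page(),
evict_page() and read_page_from_disk() are illustrative stand-ins for
PostgreSQL internals, and real buffer-manager code would also need cache
invalidation when a page is rewritten.

/*
 * Sketch of a compressed, memcached-backed second-level page cache.
 * fetch_page(): on a shared-buffers miss, try memcached before disk.
 * evict_page(): on eviction, compress the page and park it in memcached.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <libmemcached/memcached.h>
#include <zlib.h>

#define BLCKSZ 8192            /* stand-in for PostgreSQL's block size */

/* Stand-in for the real storage-manager read; here it just zeroes the buffer. */
static void read_page_from_disk(uint32_t relid, uint32_t blkno, char *buf)
{
    (void) relid; (void) blkno;
    memset(buf, 0, BLCKSZ);
}

/* Build a cache key from the relation id and block number (illustrative). */
static void page_key(char *key, size_t keylen, uint32_t relid, uint32_t blkno)
{
    snprintf(key, keylen, "pg:%u:%u", relid, blkno);
}

/* Shared buffers missed: check the memcached cluster, then fall back to disk. */
void fetch_page(memcached_st *mc, uint32_t relid, uint32_t blkno, char *page)
{
    char key[64];
    size_t vlen;
    uint32_t flags;
    memcached_return_t rc;
    char *compressed;

    page_key(key, sizeof(key), relid, blkno);
    compressed = memcached_get(mc, key, strlen(key), &vlen, &flags, &rc);
    if (rc == MEMCACHED_SUCCESS && compressed != NULL)
    {
        /* Second-level cache hit: decompress straight into the buffer. */
        uLongf destlen = BLCKSZ;
        int ok = (uncompress((Bytef *) page, &destlen,
                             (const Bytef *) compressed, vlen) == Z_OK);
        free(compressed);
        if (ok)
            return;
    }

    /* Cache miss (or unusable entry): read from disk as usual. */
    read_page_from_disk(relid, blkno, page);
}

/* Evicting from shared buffers: compress the page and store it in memcached. */
void evict_page(memcached_st *mc, uint32_t relid, uint32_t blkno, const char *page)
{
    char key[64];
    Bytef compressed[BLCKSZ + BLCKSZ / 10 + 64];   /* generous worst case */
    uLongf clen = sizeof(compressed);

    if (compress(compressed, &clen, (const Bytef *) page, BLCKSZ) != Z_OK)
        return;                 /* if compression fails, just skip the cache */

    page_key(key, sizeof(key), relid, blkno);
    (void) memcached_set(mc, key, strlen(key),
                         (const char *) compressed, clen,
                         (time_t) 0, (uint32_t) 0);
}

The compression step is what lets a cache of this kind hold a working set
well beyond physical RAM per node, which is also why it can help even on a
single server compared with simply enlarging shared_buffers.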