On Tue, Oct 12, 2010 at 10:35 AM, Kevin Grittner
<Kevin.Grittner@xxxxxxxxxxxx> wrote:
> (1) Heavily used data could be kept fully cached in RAM and not
> driven out by transient activity.

We've attempted to address this problem by adding logic to prevent the
buffer cache from being trashed by vacuums, bulk loads, and sequential
scans.  It would be interesting to know if anyone has examples of that
logic falling over or proving inadequate.

> (2) You could flag a cache used for (1) above as using "relaxed LRU
> accounting" -- it saved a lot of time tracking repeated references,
> leaving more CPU for other purposes.

We never do strict LRU accounting.

> (3) Each named cache had its own separate set of locks, reducing
> contention.

We have lock partitions, but as discussed recently on -hackers, they
seem to start falling over around 26 cores.  We probably need to
improve that, but I'd rather do it by making the locking more efficient
and by increasing the number of partitions rather than by allowing
users to partition the buffer pool by hand.

> (4) Large tables for which the heap was often scanned in its
> entirety or for a range on the clustered index could be put in a
> relatively small cache with large I/O buffers.  This avoided blowing
> out the default cache space for situations which almost always
> required disk I/O anyway.

I think, but am not quite sure, that my answer to point #1 is also
relevant here.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
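To spell out the logic I'm alluding to under point #1: large sequential
scans, VACUUM, and bulk loads allocate through a small ring of buffers
that they keep recycling, instead of competing for the whole buffer pool
(in the server this is the BufferAccessStrategy machinery, with strategy
types like BAS_BULKREAD and BAS_VACUUM).  The fragment below is only a
toy illustration of that idea in C, not the real implementation; the
pool and ring sizes are made up.

/* Toy illustration of a ring-buffer access strategy: a big scan
 * recycles a small, private ring of buffers instead of evicting
 * pages that other queries still want.  Names and sizes here are
 * invented for the example; this is not PostgreSQL code. */
#include <stdio.h>

#define POOL_SIZE 8      /* pretend shared buffer pool */
#define RING_SIZE 3      /* small ring used by the bulk scan */

int main(void)
{
    int pool[POOL_SIZE];     /* which page each buffer holds */
    int ring[RING_SIZE];     /* indexes into pool[] owned by the ring */
    int i, page, slot = 0;

    /* Buffers start out holding "hot" pages 100 and up. */
    for (i = 0; i < POOL_SIZE; i++)
        pool[i] = 100 + i;

    /* Hand the scan a fixed set of buffers to reuse. */
    for (i = 0; i < RING_SIZE; i++)
        ring[i] = i;          /* ring claims pool slots 0, 1, 2 */

    /* A 20-page sequential scan cycles through the ring only, so
     * pool slots 3..7 (the "hot" data) are never evicted. */
    for (page = 0; page < 20; page++)
    {
        int victim = ring[slot];
        pool[victim] = page;              /* "read" page into buffer */
        slot = (slot + 1) % RING_SIZE;    /* advance around the ring */
    }

    for (i = 0; i < POOL_SIZE; i++)
        printf("buffer %d holds page %d\n", i, pool[i]);
    return 0;
}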
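And to expand on "we never do strict LRU accounting": the replacement
policy is a clock sweep over the buffers with a small, capped usage
count per buffer, so a hit on a hot buffer is just an increment rather
than a reordering of some global list.  Here's a minimal sketch of that
idea, again illustrative rather than the actual code; the buffer count
is invented, and the cap of 5 simply mirrors the limit the server uses.

/* Minimal clock-sweep sketch: instead of strict LRU bookkeeping on
 * every buffer hit, each buffer keeps a small usage count that the
 * sweep decrements; a buffer is evicted when its count reaches 0.
 * Not PostgreSQL code; sizes are illustrative. */
#include <stdio.h>

#define NBUFFERS   8
#define MAX_USAGE  5    /* cap: a hit bumps this, nothing is re-sorted */

static int usage[NBUFFERS];
static int hand;        /* clock hand */

/* Called on a buffer hit: cheap, no list manipulation. */
static void pin_buffer(int buf)
{
    if (usage[buf] < MAX_USAGE)
        usage[buf]++;
}

/* Find a victim: sweep, decrementing counts until one hits zero. */
static int clock_sweep(void)
{
    for (;;)
    {
        int buf = hand;
        hand = (hand + 1) % NBUFFERS;
        if (usage[buf] == 0)
            return buf;
        usage[buf]--;
    }
}

int main(void)
{
    /* Buffers 0-3 are hot, the rest were touched once. */
    int i;
    for (i = 0; i < 4; i++) { pin_buffer(i); pin_buffer(i); pin_buffer(i); }
    for (i = 4; i < NBUFFERS; i++) pin_buffer(i);

    /* Evict three buffers: the lightly used ones go first. */
    for (i = 0; i < 3; i++)
        printf("evicting buffer %d\n", clock_sweep());
    return 0;
}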
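As for the lock partitions under point #3, the general shape is that the
buffer mapping table is protected by an array of locks rather than a
single one, and a lookup takes only the partition its buffer tag hashes
to.  Making the lock cheaper or raising the partition count attacks
contention without asking users to carve up the buffer pool themselves.
The sketch below shows the pattern with plain pthread mutexes; the hash
function and partition count are invented for the example, and the real
code uses LWLocks.

/* Sketch of hash-partitioned locking: N locks guard N slices of a
 * shared structure, so lookups of different keys rarely collide on
 * the same lock.  Illustrative only; not PostgreSQL code. */
#include <pthread.h>
#include <stdio.h>

#define NUM_PARTITIONS 16

static pthread_mutex_t partition_lock[NUM_PARTITIONS];
static long lookups[NUM_PARTITIONS];   /* pretend per-partition contents */

/* Stand-in for hashing a buffer tag (relation + block number). */
static unsigned hash_tag(unsigned rel, unsigned block)
{
    return (rel * 2654435761u) ^ (block * 40503u);
}

static void lookup_buffer(unsigned rel, unsigned block)
{
    unsigned part = hash_tag(rel, block) % NUM_PARTITIONS;

    /* Only this one partition is locked; lookups that hash elsewhere
     * can proceed in parallel on other partitions. */
    pthread_mutex_lock(&partition_lock[part]);
    lookups[part]++;
    pthread_mutex_unlock(&partition_lock[part]);
}

int main(void)
{
    int i;
    for (i = 0; i < NUM_PARTITIONS; i++)
        pthread_mutex_init(&partition_lock[i], NULL);

    for (i = 0; i < 10000; i++)
        lookup_buffer(i % 7, i);

    for (i = 0; i < NUM_PARTITIONS; i++)
        printf("partition %2d handled %ld lookups\n", i, lookups[i]);
    return 0;
}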