Greg Smith wrote:
> On Wed, 28 May 2008, Josh Berkus wrote:
>> shared_buffers: according to witnesses, Greg Smith presented at East
>> that, based on PostgreSQL's buffer algorithms, buffers above 2GB would
>> not really receive significant use. However, Jignesh Shah has tested
>> that on workloads with large numbers of connections, allocating up to
>> 10GB improves performance.
>
> Lies! The only upper limit for non-Windows platforms I mentioned was
> suggesting those recent tests at Sun showed a practical limit in the
> low multi-GB range.
>
> I've run with 4GB usefully for one of the multi-TB systems I manage;
> the main index on the most frequently used table is 420GB, and anything
> I can do to keep the most popular parts of that pegged in memory seems
> to help. I haven't tried to isolate the exact improvement going from
> 2GB to 4GB with benchmarks, though.
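(For anyone following along at home, a sketch of the postgresql.conf setting being discussed; the 4GB value is just the one Greg mentions for his system, not a general recommendation:

```
# postgresql.conf -- memory units are accepted from 8.2 onward;
# changing shared_buffers requires a server restart, and on most
# Unix platforms the kernel's SHMMAX must be raised to match.
shared_buffers = 4GB
```

On older releases you would give the equivalent value in 8kB pages instead.)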
Yep, it's always the index that seems to benefit from high cache hits. In
one of the recent tests, what I ended up doing was running a select
count(*) from trade where t_id >= $1 and t_id < SOMEMAX just to kick in
an index scan and get the index into memory first. So the higher the
buffer pool, the better the hit rate for the index in it, and the better
the performance.
-Jignesh
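
(A sketch of the warming query Jignesh describes, assuming a trade table
with a btree index on t_id; the parameter bounds are placeholders chosen
to cover the popular key range, and $2 stands in for the "SOMEMAX"
constant in his message:

```sql
-- Index range scan over the hot part of t_id; the side effect of
-- interest is that the scanned index pages land in shared_buffers,
-- so later queries against that range hit cache instead of disk.
SELECT count(*)
FROM trade
WHERE t_id >= $1
  AND t_id < $2;   -- "SOMEMAX" in the message above
```

Running EXPLAIN on the query first is a cheap way to confirm the planner
actually chooses an index scan rather than a sequential scan, since a
seqscan would warm the heap instead of the index.)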