On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <charlesrg@xxxxxxxxxxx> wrote:
> I've benchmarked shared_buffers with high and low settings, in a server
> dedicated to postgres with 48GB my settings are:
> shared_buffers = 37GB
> effective_cache_size = 38GB
>
> Having a small number and depending on OS caching is unpredictable, if the
> server is dedicated to postgres you want to make sure postgres has the
> memory. A random unrelated process doing a cat /dev/sda1 should not
> destroy postgres buffers.
> I agree your problem is mostly related to dirty_background_ratio, where
> buffers are READ only and have nothing to do with disk writes.

You make an assertion here but don't tell us anything about your
benchmarking methods. My testing in the past has shown catastrophic
performance when a very large percentage of memory is given over to
PostgreSQL shared buffers under heavy write loads, especially
transactional ones. Many others on this list have had the same thing
happen.

Also, you assume PostgreSQL has a better / smarter caching algorithm than
the OS kernel, and often this is NOT the case.

In this particular instance the OP may not be seeing an issue from too
large a shared_buffers setting, but my point still stands: a large
shared_buffers can cause problems under heavy or even moderate write
loads.
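For what it's worth, on a dedicated 48GB box I'd start from something far
more conservative and benchmark up from there. A sketch only -- these
values are illustrative, not tuned for anyone's workload:

    # postgresql.conf (9.2-era) -- illustrative starting point only
    shared_buffers = 8GB                # common advice: start near 25% of RAM
    effective_cache_size = 36GB         # planner hint only: shared_buffers
                                        # plus expected OS page cache
    checkpoint_segments = 64            # let checkpoints be spread out
    checkpoint_completion_target = 0.9  # smooth checkpoint writes over time

Note that effective_cache_size allocates nothing; it only tells the
planner how much caching to expect, so it can stay large even while
shared_buffers shrinks.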
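And since dirty_background_ratio came up: on a 48GB machine the kernel
defaults let many gigabytes of dirty pages pile up before background
writeback even starts, which is exactly what turns a checkpoint into an
I/O storm. Again a sketch, with numbers you'd want to test against your
own storage rather than take as gospel:

    # /etc/sysctl.conf -- illustrative values
    vm.dirty_background_ratio = 1     # kick off background writeback early
    vm.dirty_ratio = 5                # throttle writers well before the
                                      # dirty pool reaches tens of GB
    # on kernels that support byte-based limits (2.6.29+), absolute caps
    # are easier to reason about on big-RAM boxes:
    # vm.dirty_background_bytes = 268435456   # 256MB
    # vm.dirty_bytes = 1073741824             # 1GB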