I've benchmarked shared_buffers with high and low settings; on a server dedicated to Postgres with 48GB of RAM my settings are:
shared_buffers = 37GB
effective_cache_size = 38GB

Having a small shared_buffers and depending on OS caching is unpredictable; if the server is dedicated to Postgres, you want to make sure Postgres has the memory. A random unrelated process doing a cat /dev/sda1 should not destroy Postgres buffers. I agree your problem is most likely related to dirty_background_ratio, since shared_buffers are READ buffers and have nothing to do with disk writes.

From: strahinjak@xxxxxxxxxxx
Date: Thu, 7 Feb 2013 13:06:53 +0100
Subject: Re: postgresql.conf recommendations
To: kgrittn@xxxxxxxxx
CC: johnnydtan@xxxxxxxxx; ac@xxxxxx; jkrupka@xxxxxxxxx; alex@xxxxxxxxxxxxxxxxx; pgsql-performance@xxxxxxxxxxxxxx

As others suggested, shared_buffers = 48GB is too large. You should never need to go above 8GB. I have a similar server and mine has:

shared_buffers = 8GB
checkpoint_completion_target = 0.9

This looks like a problem of dirty memory being flushed to disk. You should set up your monitoring to track dirty memory from /proc/meminfo and check whether it correlates with the slowdowns. Also, vm.dirty_background_bytes should always be a fraction of vm.dirty_bytes: once more than vm.dirty_bytes bytes are dirty, the kernel stops all writing to disk until it has flushed everything, whereas when dirty memory reaches vm.dirty_background_bytes it slowly starts flushing those pages to disk in the background. As far as I remember, vm.dirty_bytes should be configured to be a little less than the cache size of your RAID controller, and vm.dirty_background_bytes should be 4 times smaller.

Strahinja Kustudić | System Engineer | Nordeus

On Wed, Feb 6, 2013 at 10:12 PM, Kevin Grittner <kgrittn@xxxxxxxxx> wrote:
|
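
For reference, a minimal sketch of the dirty-memory check and sysctl tuning described in the quoted message, assuming a RAID controller with a 512MB write cache (that figure is an assumption; substitute your controller's actual cache size):

    # Check dirty memory and writeback activity to correlate with the slowdowns
    grep -E '^(Dirty|Writeback):' /proc/meminfo

    # vm.dirty_bytes a little below the assumed 512MB controller cache,
    # vm.dirty_background_bytes roughly 4 times smaller
    sysctl -w vm.dirty_bytes=503316480
    sysctl -w vm.dirty_background_bytes=125829120

To keep the settings across reboots, put the two vm.* lines in /etc/sysctl.conf and apply them with sysctl -p.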