Scott Carey <scott@xxxxxxxxxxxxxxxxx> wrote:

> So how far do you go? 128MB? 32MB? 4MB?

Under 8.2 we had to keep shared_buffers smaller than the RAM on our
BBU RAID controller, which had 256MB -- so it worked best with
shared_buffers in the 160MB to 200MB range. With 8.3 we found that
anywhere from 512MB to 1GB performed better without creating clusters
of stalls. In both cases we also had to significantly boost the
aggressiveness of the background writer.

Since the "sweet spot" depends so heavily on things like your RAID
controller and your workload, I *highly* recommend Greg's incremental
tuning approach. The rough guidelines which get tossed about make
reasonable starting points, but you really need to make relatively
small changes under the actual load you're trying to optimize, and
monitor the metrics which matter to you. On a big data warehouse you
might not care if the database becomes unresponsive for a couple of
minutes now and then if that means better overall throughput. On a
web server, you may have no trouble keeping up with the overall load,
but still want to ensure reasonable response times.

> Anecdotal and an assumption, but I'm pretty confident that on any
> server with at least 1GB of dedicated RAM, setting it any lower
> than 200MB is not even going to help latency (assuming checkpoint
> and log configuration is in the realm of sane, and
> connections*work_mem is sane).

I would add the assumption that you've got at least 256MB of BBU
cache on your RAID controller.

> The defaults have been so small for so long on most platforms,
> that any increase over the default generally helps performance --
> and in many cases dramatically.

Agreed.

> So if more is better, then most users assume that even more should
> be better.

That does seem like a real risk.

-Kevin

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
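For what it's worth, a starting point for the incremental approach described above might look like the following postgresql.conf fragment. The numbers are purely illustrative (taken from the 8.3-era figures mentioned in this thread), not a recommendation for any particular system; every parameter here should be nudged in small steps while watching your own metrics:

```
# Illustrative starting point only -- adjust in small increments
# under representative load, not all at once.
shared_buffers = 512MB             # 512MB-1GB worked well here on 8.3
checkpoint_segments = 16           # keep checkpoint config "in the realm of sane"
checkpoint_completion_target = 0.7 # spread checkpoint I/O out over time
bgwriter_delay = 200ms             # background writer wakeup interval
bgwriter_lru_maxpages = 500        # more aggressive background writer
bgwriter_lru_multiplier = 4.0      # clean further ahead of demand
```

The bgwriter_* settings above reflect the "significantly boost the aggressiveness of the background writer" point: higher bgwriter_lru_maxpages and bgwriter_lru_multiplier let the background writer clean more dirty buffers ahead of backend demand, at the cost of some extra write traffic.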
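As for monitoring the metrics which matter while making those small changes: one useful source (available from 8.3 on) is the pg_stat_bgwriter view. Comparing snapshots before and after each configuration change shows whether checkpoints are being forced early and whether backends are having to write their own buffers:

```
-- Snapshot of background writer / checkpoint activity.  Rising
-- checkpoints_req relative to checkpoints_timed suggests checkpoints
-- are being forced by WAL volume; rising buffers_backend suggests the
-- background writer isn't keeping ahead of demand.
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean,
       maxwritten_clean, buffers_backend
FROM pg_stat_bgwriter;
```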