On Fri, Feb 13, 2009 at 6:35 PM, Kevin Grittner <Kevin.Grittner@xxxxxxxxxxxx> wrote:
> You should definitely set effective_cache_size.

Wow -- I set this to 10GB (5GB for shared buffers + another 5GB for cache), and today's average write frequency went from 20MB/sec to just 1MB/sec. The documentation suggests that effective_cache_size is only used for query planning in conjunction with indexes. So how come it affects write access?

> If you still see the problem after that, I suggest testing different
> settings for:
>
> bgwriter_lru_maxpages
> bgwriter_lru_multiplier
> checkpoint_segments
> checkpoint_timeout
> checkpoint_completion_target
>
> Both the checkpoint process and the background writer changed quite a
> bit, and these are the main knobs for tuning the new version.

We are hoping to set up a duplicate instance and play back the SQL log against it so we can experiment with different settings. Until we have such a setup, I'm not sure what to do with the knobs other than frob them wildly. :-)

Are there any statistics, either in PostgreSQL proper or in the OS, that I can use as metrics to guide the tuning? For example, is there anything in pg_stat_bgwriter that can help me tune the bgwriter_lru_* settings?

Do transactions that only contain query statements end up writing entries to the WAL when they commit? If so, can we avoid the writes by wrapping our queries in "read only" transactions, and would it be worth the effort?

Our web application is handling 30 requests per second at peak time; each request performs dozens of queries in autocommit mode, i.e. one transaction per query. Only a minority of those requests actually end up modifying the database. PostgreSQL is committing and fsyncing 600-800 transactions per second, so that's probably a good chunk of disk/CPU usage wasted, right?

Alexander.
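P.S. For reference, the kind of query I had in mind is a periodic snapshot of the counters (column names as documented for 8.3's pg_stat_bgwriter view; the interpretation comments are my reading of the docs):

```sql
-- Snapshot of the checkpoint / background-writer counters (PostgreSQL 8.3).
-- buffers_checkpoint: buffers written during checkpoints
-- buffers_clean:      buffers written by the background writer's LRU scan
-- maxwritten_clean:   times the bgwriter stopped because it hit bgwriter_lru_maxpages
-- buffers_backend:    buffers written directly by backends (what tuning tries to minimize)
SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,
       buffers_clean,
       maxwritten_clean,
       buffers_backend,
       buffers_alloc
  FROM pg_stat_bgwriter;
```

Sampling this once a minute and diffing successive rows would show which write path is doing the work between checkpoints.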
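And for the read-only idea, the wrapping I had in mind is just the standard READ ONLY transaction mode (the table name below is a placeholder):

```sql
-- Sketch: run query-only work in an explicitly read-only transaction.
-- "some_table" is a placeholder, not a real table in our schema.
BEGIN TRANSACTION READ ONLY;
SELECT count(*) FROM some_table;
COMMIT;
```

The open question is whether committing such a transaction still touches the WAL at all, or whether the mode only enforces that no writes are attempted.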
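To make those counters actionable, one could compute each write path's share of the total. A minimal sketch in Python; the counter names mirror the pg_stat_bgwriter view, but the sample values here are made up for illustration:

```python
# Sketch: turn pg_stat_bgwriter buffer counters into write-path percentages.
# The keys mirror the 8.3 view's columns; the values are hypothetical.
sample = {
    "buffers_checkpoint": 120000,  # written by the checkpoint process
    "buffers_clean": 60000,        # written by the background writer
    "buffers_backend": 20000,      # written directly by backends
}

def write_shares(counters):
    """Return each write path's share of total buffer writes, in percent."""
    total = sum(counters.values())
    return {name: 100.0 * value / total for name, value in counters.items()}

for name, pct in write_shares(sample).items():
    print(f"{name}: {pct:.1f}%")
```

A large buffers_backend share would suggest the background writer isn't keeping up, which is the usual cue to raise bgwriter_lru_maxpages and/or bgwriter_lru_multiplier.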
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance