On Thu, Feb 21, 2008 at 5:40 PM, Dave Cramer <pg@xxxxxxxxxxxxx> wrote:
>
> On 21-Feb-08, at 6:16 PM, Scott Marlowe wrote:
>
> > On Thu, Feb 21, 2008 at 4:59 PM, Mark Kirkwood
> > <markir@xxxxxxxxxxxxxxx> wrote:
> >> The other parameter you might want to look at is effective_cache_size -
> >> increasing it will encourage index use. On a machine with 16GB the
> >> default is probably too small (there are various recommendations about
> >> how to set this. ISTR either Scott M or Greg Smith had a page somewhere
> >> that covered this quite well - guys?).
> >>
> The default is always too small in my experience.
>
> What are the rest of the configuration values?

I was thinking we almost need a matrix for each version: small, typical, large, and too-big values for each setting, broken down by hardware config.

max_connections is the one I see abused a lot here. It's a setting you can set way too high and not notice there's a problem until you actually try to use that many connections and find out your database performance just went south. You should closely monitor connection usage and track it over time, and also benchmark the behavior of your db under realistic but heavy load. Know how many connections you can handle in a test setup before things get ugly, and then avoid setting max_connections any higher than about half that if you can.

The same kind of thinking applies to any setting with a straight-line 1:1 increase in resource usage, or a tendency toward one, like work_mem (formerly sort_mem).

Dammit, nearly every one of these really needs its own mini-howto on how to set it... They are all covered in the runtime config section of the docs.
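
On the connection-monitoring point, here's a minimal sketch in plain SQL of what I mean. The conn_samples table is just an illustrative name, not anything built in; the idea is you'd run the INSERT from cron every few minutes and graph the results next to your load tests.

  -- How close are we to max_connections right now?
  SELECT (SELECT count(*) FROM pg_stat_activity) AS current_connections,
         current_setting('max_connections')      AS max_connections;

  -- Illustrative sampling table for tracking connection counts over time.
  CREATE TABLE conn_samples (
      sampled_at  timestamptz NOT NULL DEFAULT now(),
      connections integer     NOT NULL
  );

  -- Run this periodically (e.g. from cron) to record a sample.
  INSERT INTO conn_samples (connections)
  SELECT count(*) FROM pg_stat_activity;

  -- The settings discussed in this thread, for quick inspection:
  SELECT name, setting, unit
    FROM pg_settings
   WHERE name IN ('max_connections', 'work_mem',
                  'effective_cache_size', 'shared_buffers');

Nothing fancy, but tracking that over time alongside your benchmarks makes it pretty obvious where the "about half of what you can handle" line sits.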