On Fri, 1 Jun 2007, Gregory Stewart wrote:
> Is our configuration flawed?
For sure. The bad news is that you have a good chunk of work to do; the good news is that you should quickly see a dramatic improvement as that progresses.
> Anyways, watching the system processes we realized that PostgreSQL is only using about 300 Mbytes for itself. Also, both cores are usually maxed out to 100% usage. Are we expecting too much from our server?
Your server isn't even running close to its capacity right now. Start by following the instructions at http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm to tune your system so it actually is using much more of your memory. When you run a manual VACUUM ANALYZE as it recommends, you'll probably discover you have to increase max_fsm_pages. The follow-up references at the bottom of that page will lead you to several tuning guides that will go into more detail about other things you might do.
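If you're not sure what to watch for there, a database-wide vacuum from psql goes something like this (run it as a superuser; the exact output wording varies by version, so treat this as a sketch rather than what you'll see verbatim):

    -- Vacuum and analyze every table, printing per-table details
    VACUUM VERBOSE ANALYZE;
    -- Near the end of the output, 8.1 prints a free space map summary,
    -- roughly:
    --   INFO:  free space map contains ... pages in ... relations
    --   DETAIL:  ... page slots are required to track all free space.
    -- If the slots required exceed max_fsm_pages, raise max_fsm_pages in
    -- postgresql.conf and restart the server (that memory is allocated
    -- at startup, so a reload isn't enough).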
The first obvious thing is that your extremely low work_mem setting is likely throttling all your joins. Read http://www.postgresql.org/docs/8.1/interactive/runtime-config-resource.html to understand how that setting works, then test some of your queries after increasing it and see how things improve. Note that you have to be careful making comparisons here: if you run exactly the same query twice, the second run will usually be faster simply because the data is already cached.
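Just to sketch the mechanics (the orders/customers tables here are stand-ins for your own, and 32MB is an arbitrary example value, not a recommendation for your box):

    -- Baseline: time one of your slow joins at the current setting
    SHOW work_mem;
    EXPLAIN ANALYZE
      SELECT o.*, c.name
        FROM orders o JOIN customers c ON c.id = o.customer_id
       ORDER BY o.order_date;

    -- Raise work_mem for this session only, then run the same query again.
    -- On 8.1 the value is in KB (32768 = 32MB); newer releases also
    -- accept units like '32MB'.
    SET work_mem = 32768;
    EXPLAIN ANALYZE
      SELECT o.*, c.name
        FROM orders o JOIN customers c ON c.id = o.customer_id
       ORDER BY o.order_date;

Once you settle on a value, put it in postgresql.conf so every session gets it. Keep in mind that each sort or hash in each backend can use up to work_mem on its own, so don't go overboard if you have a lot of concurrent connections.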
Next, if your setting for checkpoint_segments is at the default, that would be a killer with your workload as well.
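For reference, the 8.1 default there is only 3 segments, which forces very frequent checkpoints under a heavy write load. An illustrative (not tuned) bump in postgresql.conf might look like:

    # postgresql.conf -- example values only, adjust for your workload
    checkpoint_segments = 10   # default 3; more segments = less frequent
                               # checkpoints, at the cost of more disk in pg_xlog
    checkpoint_timeout = 300   # seconds between forced checkpoints (8.1 default)

The tuning guides referenced above go into how to pick a real value and how to spot "checkpoints are occurring too frequently" warnings in the logs.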
That should get you started. If you still aren't happy with performance after all that, post again with some details about your disk configuration and an EXPLAIN plan for something that's moving slowly.
--
* Greg Smith gsmith@xxxxxxxxxxxxx http://www.gregsmith.com Baltimore, MD