2010/10/1 Fabrício dos Anjos Silva <fabricio.silva@xxxxxxxxxxxxxx>
Craig,
I agree with you. Not completely, but I do.
I'm just stuck in a situation where I can't figure out what values to use for the parameters. I can't even think of a way to test and discover that.
I followed Josh Berkus' GUC spreadsheet and some tutorials on the PG wiki, but how do I test whether my configuration is good or bad? I see in the PG log that some queries get bad plans, but what should I do in order to tell PG to make better decisions? I tried different values with no success.
You can set different values for most configuration params on individual db connections, so you can test different values for individual slow-running queries. Rather than disabling whole features in the entire database - which may well make lots of other queries run less quickly - you can, at the very least, disable those features just before running the queries that are known to be slow and for which you could not find global values that worked well. Disable sequential scans just before running query x, or boost work_mem to a very high value just for query y.

It is also possible that you've simply outstripped your hardware's capability. We had a database with a number of tables containing tens of millions of rows and queries which frequently required aggregating over whole tables. Moving from 8GB of RAM to 48GB (so that a large chunk of the db fits in memory) and from 6 spindles to 12, and then just modifying the global config to suit the new hardware, gave us a huge performance boost that we could never have gotten on the old hardware, no matter how much tuning of individual queries we did. I was actually able to drop all of the custom config tweaks that we had on individual queries, though I'm sure I'll eventually wind up adding some back - queries that aggregate over large tables really benefit from more work_mem than I want to configure globally.
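For example, something along these lines - the table names, queries, and values below are placeholders I made up to illustrate the mechanism, not a recommendation:

    -- Scope an override to a single transaction with SET LOCAL;
    -- it reverts automatically at COMMIT or ROLLBACK.
    BEGIN;
    SET LOCAL enable_seqscan = off;   -- discourage seq scans for this query only
    SELECT count(*) FROM big_table WHERE some_col < 1000;
    COMMIT;

    -- Or override for the rest of the session and reset afterwards.
    SET work_mem = '512MB';           -- placeholder value; size it to your aggregate
    SELECT customer_id, sum(amount) FROM orders GROUP BY customer_id;
    RESET work_mem;

The SET LOCAL form is usually the safer one, since the setting can't leak into later queries if the connection comes from a pool.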