On Wed, 20 Jun 2007, Tom Lane wrote:
> I think what would be much more useful in the long run is some serious
> study of the parameters themselves.  For instance, random_page_cost is
> a self-admitted oversimplification of reality.
If I could figure out who would sponsor such a study, that's what I'd be doing right now. I have studies on many of the commit-related parameters that I'll have ready in another few days; those are straightforward to map out. But you know what I have never found? A good benchmark of complicated-query performance that I could run studies against for things like random_page_cost. Many of the query optimizer's tuning knobs still seem opaque to me, and I'm not sure how to put together a proper test that would illuminate their operation and map out their useful range.
Here's an example of one of the simplest questions in this area, to demonstrate the kind of thing I wonder about. Say I have a properly indexed database of moderate size, large enough that you're in big trouble if a query falls back to a sequential scan. How can I tell whether effective_cache_size is in the right ballpark for the planner to navigate that well? Right now people back into a setting for that parameter based on how much memory their system has, but I never see anybody saying "since your main table is X GB and its index is Y GB, you really need enough memory to set effective_cache_size to Z GB if you want queries/joins on that table to perform well".
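To make that concrete, the comparison I'd like to be able to reason about looks something like this. This is only a sketch, and the table and index names (accounts, accounts_pkey) are made up; substitute your own big table:

  -- Rough sketch: compare the physical size of a large table and its
  -- index against the current effective_cache_size setting.
  -- pg_relation_size/pg_size_pretty are available in 8.1 and later.
  SELECT pg_size_pretty(pg_relation_size('accounts'))      AS table_size,
         pg_size_pretty(pg_relation_size('accounts_pkey')) AS index_size;

  SHOW effective_cache_size;

What's missing is the rule that turns those first two numbers into a recommended value for the third, and that mapping is exactly what I don't know how to derive from any benchmark I've seen.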
--
* Greg Smith  gsmith@xxxxxxxxxxxxx  http://www.gregsmith.com  Baltimore, MD