Julien Cigar <jcigar@xxxxxxxxx> wrote:

> try to increase cpu_tuple_cost to 0.1

I agree that's on the right track, but possibly an overly blunt tool for the job. The following settings are likely to need adjustment, IMO:

effective_cache_size: People often set this to somewhere in the range of 50% to 75% of the RAM on the machine. This setting does not allocate RAM, but tells the planner how likely it is to find things in cache for, say, repeated index access. A higher setting makes the random access involved in index scans seem like less of a problem.

random_page_cost: You seem to have a very high cache hit ratio, between shared_buffers and the OS cache. To model this you should decrease random_page_cost to something just above seq_page_cost, or equal to it. To reflect the relatively low cost of reading a page from the OS cache (compared to actually reading from disk) you might want to reduce both of these below 1. 0.1 is a not-uncommon setting for instances with the active portion of the database well-cached.

cpu_tuple_cost: I always raise this; I think our default is just too low to accurately model the cost of reading a row, compared to the cost factors used for other things. In combination with the above changes I've never had to go beyond 0.03 to get a good plan. I've pushed it to 0.05 to see if that put me near a tipping point for a bad plan, and saw no ill effects. I've never tried higher than 0.05, so I can't speak to that.

In any event, your current cost settings aren't accurately modeling actual costs in your environment for your workload, and you need to adjust them. One of the estimates was off, so increasing the statistics sample size might help, but I suspect you will need to make adjustments like the above regardless.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
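
P.S. A minimal sketch of how you might try all of this in a single session before persisting anything. The specific values are illustrative assumptions (a machine with 64GB of RAM and a well-cached active data set), not tuned recommendations, and some_table/some_column are hypothetical stand-ins for wherever your estimate was off:

    -- session-local overrides; nothing is written to postgresql.conf
    SET effective_cache_size = '48GB';  -- assumes ~75% of 64GB RAM; use your own figure
    SET seq_page_cost = 0.1;            -- hot data served mostly from cache
    SET random_page_cost = 0.1;         -- equal to seq_page_cost, per the above
    SET cpu_tuple_cost = 0.03;          -- raised from the 0.01 default

    -- re-run the problem query (hypothetical here) and compare plans
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM some_table WHERE some_column = 42;

    -- if a row estimate is off, a larger statistics sample may help
    ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 1000;
    ANALYZE some_table;

If the new plan holds up, make the settings permanent with ALTER SYSTEM SET, or by editing postgresql.conf and reloading.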