Franck Routier <franck.routier@xxxxxxxxx> wrote:
> Ok, the problem definitely comes from the default_statistics_target
> which is obviously too high on the database.
> Now... can someone help me understand what happens? Where can I look
> (in pg_stats?) to see the source of the problem? Maybe a column with
> a huge list of different values the planner has to parse?

This is a fundamental issue in query planning -- how much work do you
want to do to try to come up with the best plan? Too little, and the
plan can be unacceptably slow; too much, and you spend more extra time
on planning than the improvement in the plan (if any) saves you.
Reading and processing statistics gets more expensive as you boost the
volume.

What I would suggest is taking the default_statistics_target for the
cluster back down to the default, and selectively boosting the
statistics target for individual columns as you find plans which
benefit. Don't set it right at the edge of the tipping point, but don't
automatically jump to 5000 every time either.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
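
For illustration, here is a minimal sketch of how one might look in
pg_stats for columns carrying very large most-common-value lists and
then raise the statistics target for a single column; the table and
column names "orders" and "status" are placeholders, not taken from
the thread:

  -- Columns whose MCV lists are largest; a high default_statistics_target
  -- makes these lists long, and the planner has to read them for every
  -- query that touches the column.
  SELECT schemaname, tablename, attname, n_distinct,
         array_length(most_common_vals::text::text[], 1) AS mcv_entries
  FROM pg_stats
  WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
  ORDER BY mcv_entries DESC NULLS LAST
  LIMIT 20;

  -- Raise the target for one column only ("orders"/"status" are placeholder
  -- names), then re-gather statistics for that table.
  ALTER TABLE orders ALTER COLUMN status SET STATISTICS 1000;
  ANALYZE orders;

The per-column setting overrides default_statistics_target for that
column, so the cluster-wide value can stay at the default while only
the columns whose plans actually benefit get the extra detail.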