On Sun, Apr 11, 2010 at 4:41 PM, Corin <wakathane@xxxxxxxxx> wrote:
> On 11.04.2010 23:18, Scott Marlowe wrote:
>>
>> In both instances your estimated number of rows is WAY higher than the
>> actual number of rows returned. Perhaps if you increased
>> default_statistics_target to 100, 200, 500 etc., re-analyzed, and then
>> re-ran explain analyze.
>>
>> Also, increasing work_mem might encourage the bitmap index scans to occur.
>>
>
> Increasing the statistics target to >= 500 indeed helped a lot and caused
> the planner to choose a good plan. :)
>
> I'm now thinking about increasing the default_statistics_target of the
> whole server from the default (100) to 1000, because I have many tables
> with similar data. As the size of the table index seems not to change at
> all, I wonder how much additional storage is needed? I only care about
> runtime performance: are inserts/updates affected by this change? Or is
> only analyze affected (which is only run once during the night)?

The default stats target has more to do with how many distinct values /
ranges of values you have. If your data has a nice smooth distribution
curve, smaller values are OK. Large datasets with very weird data
distributions can throw off the planner.

There's a cost both for analyzing and for query planning. If 500 fixes
this table, and all the other tables are fine at 100, then it might be
worth doing an alter table alter column for just this column. However,
then you've got to worry about the time spent monitoring and analyzing
queries in the database for if / when they need a higher stats target.

Also, look at increasing effective_cache_size if the db fits into memory.
Lowering random_page_cost helps too.
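
For reference, a rough sketch of what the per-column change and the other
knobs look like. The table / column names and the specific numbers below
are only placeholders, not values from your setup; tune them against your
own data and hardware:

    -- Raise the stats target for just the skewed column
    -- (hypothetical table/column names -- substitute your own):
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
    ANALYZE orders;

    -- Session-level settings worth experimenting with; if they help,
    -- make them permanent in postgresql.conf:
    SET work_mem = '64MB';              -- more room for bitmap index scans / hashes
    SET effective_cache_size = '6GB';   -- roughly shared_buffers + OS cache
    SET random_page_cost = 2.0;         -- lower than the default 4.0 if data is mostly cached

The SET lines only affect the current session, so you can test them with
explain analyze before touching the config file.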