On Fri, Mar 23, 2007 at 12:41:58PM +0100, Henrik Zagerholm wrote:
> On 23 Mar 2007, at 12:33, Michael Fuhr wrote:
> >The row count estimate for fk_filetype_id = 83 is high by an order
> >of magnitude:
> >
> >>Bitmap Index Scan on tbl_file_idx6 (cost=0.00..25.65 rows=1251
> >>width=0) (actual time=21.958..21.958 rows=112 loops=1)
> >>
> >>Index Cond: (fk_filetype_id = 83)
> >
> >Have you run ANALYZE or VACUUM ANALYZE on these tables recently?
> >If so then you might try increasing the statistics target for
> >tbl_file.fk_filetype_id and perhaps some of the columns in the join
> >conditions.
>
> I did a vacuum full and reindex on all tables.

VACUUM FULL is seldom (if ever) necessary if you're running plain
VACUUM (without FULL) often enough, either manually or via autovacuum.

> Now I also did a vacuum analyze on tbl_acl (the biggest table with
> about 4.5 million rows)
>
> Same result.

I'd suggest analyzing all tables.  The bad estimate I mentioned
appears to be for a column in tbl_file, so if you didn't analyze
that table then the query plan probably won't improve.

> But do you mean if the row_count estimate is big it can't use any
> index on any other table within the JOINs?

High row count estimates make the planner think that scanning entire
tables would be faster than using indexes.  The more of a table a
query must fetch, the less efficient an index scan becomes, to the
point that a sequential scan is faster than an index scan.

> Any specific parameters I should adjust?

If analyzing the tables doesn't improve the row count estimates then
try increasing some columns' statistics targets and re-analyzing the
table, or just that column.  Example:

  ALTER TABLE tbl_file ALTER fk_filetype_id SET STATISTICS 100;
  ANALYZE tbl_file (fk_filetype_id);

The default statistics target is 10; the maximum value is 1000.

On some systems I've found that reducing random_page_cost from 4
(the default) to 2 gives more realistic cost estimates for index
scans.

-- 
Michael Fuhr
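
For reference, a rough way to check whether raising the statistics
target actually improved the estimate.  The table, column, and the
literal 83 are taken from the plan quoted above; this assumes the
table is on the default search_path and that the full join query is
re-run separately afterwards:

  -- inspect the statistics the planner will use for the column
  SELECT n_distinct, most_common_vals, most_common_freqs
  FROM pg_stats
  WHERE tablename = 'tbl_file' AND attname = 'fk_filetype_id';

  -- compare the estimated row count against the actual count
  -- (the earlier plan showed an estimate of 1251 vs. 112 actual)
  EXPLAIN ANALYZE SELECT * FROM tbl_file WHERE fk_filetype_id = 83;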
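
Likewise, random_page_cost can be tried per session before changing
it globally; a minimal sketch, reusing the single-table predicate
above as a stand-in for the original join query:

  SET random_page_cost = 2;    -- affects only the current session
  EXPLAIN ANALYZE SELECT * FROM tbl_file WHERE fk_filetype_id = 83;
  RESET random_page_cost;      -- revert to the configured value

If the plans and timings look better with the lower value, set
random_page_cost in postgresql.conf and reload the server to make
the change permanent.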