Henrik Zagerholm wrote:
>> Have you run ANALYZE or VACUUM ANALYZE on these tables recently?
>> If so then you might try increasing the statistics target for
>> tbl_file.fk_filetype_id and perhaps some of the columns in the join
>> conditions.
>
> I did a vacuum full and reindex on all tables.
>
> Now I also did a vacuum analyze on tbl_acl (the biggest table, with
> about 4.5 million rows).
>
> Same result.

You only updated the statistics for one table. Run ANALYZE on all your
tables and measure again (see the first example at the end of this
mail).

> But do you mean if the row_count estimate is big it can't use any
> index on any other table within the JOINs?

You misunderstand. Tom is saying that the estimated row counts (based
on the statistics gathered by VACUUM ANALYZE) are very different from
the actual row counts (as reported by EXPLAIN ANALYZE). That means your
statistics are off, and those statistics are what drive the query
planner. If you have bad statistics, you can (and probably will) get
bad query plans.

> Any specific parameters I should adjust?

If the estimates are still off this much after running ANALYZE, you can
try increasing the statistics targets for the columns where the
estimates are furthest from reality (second example below).

There is also the possibility that the planner chooses an expensive
plan because it doesn't have a choice. Low memory is a likely cause in
such cases. Make sure you configure enough shared memory and that
postgres is configured to use it (third example below).

-- 
Alban Hertroys
alban@xxxxxxxxxxxxxxxxx

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //
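
Example 1: updating the statistics for all tables. A minimal sketch;
run it in psql as a user allowed to analyze the tables. A bare ANALYZE
with no table name covers every table in the current database:

  -- Refresh the planner's statistics for all tables at once.
  ANALYZE;

  -- Or combine it with reclaiming dead row versions:
  VACUUM ANALYZE;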
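
Example 2: raising a per-column statistics target. The table and column
names come from the suggestion quoted above; the value 100 is only an
illustration (check the documentation of your version for the default
and the upper limit):

  -- Collect a larger statistics sample for the misestimated column,
  -- then re-analyze so the planner sees the new numbers.
  ALTER TABLE tbl_file ALTER COLUMN fk_filetype_id SET STATISTICS 100;
  ANALYZE tbl_file;

  -- Afterwards, re-run EXPLAIN ANALYZE on your query and compare the
  -- estimated row counts against the actual ones.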
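
Example 3: memory settings in postgresql.conf. The numbers below are
placeholders, not recommendations; size them to your machine's RAM.
The memory-unit syntax assumes PostgreSQL 8.2 or later (older releases
take a count of 8 kB buffers instead):

  # shared memory for the buffer cache; changing it needs a restart
  # and may require raising the kernel's shared memory limit (SHMMAX
  # on most Unixes)
  shared_buffers = 128MB

  # memory available per sort or hash operation; a value that is too
  # low pushes the planner away from hash joins and in-memory sorts
  work_mem = 8MB

  # how much data the planner assumes the OS caches on postgres'
  # behalf; influences index vs. sequential scan choices
  effective_cache_size = 512MB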