On Thu, Apr 19, 2012 at 3:44 AM, Josh Berkus <josh@xxxxxxxxxxxx> wrote:
>> 7500ms
>> http://explain.depesz.com/s/
> This plan seems very odd -- doing individual index lookups on 2.8m rows
> is not standard planner behavior. Can you confirm that all of your
> other query cost parameters are the defaults?

This is similar to the issue with LIMIT that Simon was complaining about
a few weeks ago [1]. A lot of the estimation in the planner is biased to
give overestimates for the number of rows returned in the face of
uncertainty. This works well for joins but interacts really badly with
limits.

The two issues here are the join cardinality being overestimated by a
factor of 15x, and then the unique being off by another 50x. The result
is that the planner thinks it needs to scan only 0.25% of the input,
while actually it needs to scan the whole of it, underestimating the
real cost by a factor of 400.

I'm not sure what to do about the unique node overestimation, but I
think the planner could be coaxed to be less optimistic about the limit
by adding an optimization barrier and some selectivity-decreasing
clauses between the limit and the rest of the query:

select * from (
    select distinct product_code
    from product p_
    inner join product_parent par_ on p_.parent_id = par_.id
    where par_.parent_name like 'aa%'
    offset 0 -- optimization barrier
) as x
where product_code = product_code -- reduce selectivity estimate by 200x
limit 2;

[1] http://archives.postgresql.org/message-id/CA+U5nMLbXfUT9cWDHJ3tpxjC3bTWqizBKqTwDgzebCB5bAGCgg@xxxxxxxxxxxxxx

Cheers,

Ants Aasma
-- 
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de
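
P.S. As a sketch (assuming the table and column names from the query
above), the effect of the OFFSET 0 barrier can be checked by running
EXPLAIN ANALYZE on both forms of the query and comparing the planner's
estimated row counts against the actual ones:

-- original form: planner expects to stop early because of the LIMIT
explain analyze
select distinct product_code
from product p_
inner join product_parent par_ on p_.parent_id = par_.id
where par_.parent_name like 'aa%'
limit 2;

-- rewritten form: subquery is planned without the LIMIT pushed into it
explain analyze
select * from (
    select distinct product_code
    from product p_
    inner join product_parent par_ on p_.parent_id = par_.id
    where par_.parent_name like 'aa%'
    offset 0 -- optimization barrier
) as x
where product_code = product_code -- reduce selectivity estimate
limit 2;

If the barrier has the intended effect, the second plan should no longer
rely on the per-row index lookups, and the estimated vs. actual row
counts in the first plan should show where the join and unique
misestimates come from.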