That sure looks like the source of the problem to me too; I've seen similar behavior in queries not very different from that one. It's hard to guess what the problem is exactly without knowing more about the data distribution in article_words, though.

Given the EXPLAIN ANALYZE results, I'd run the deepest subquery on its own and try to get its row estimate to match reality, either by altering statistics targets or by tweaking the query to give the planner more information. For example, I'd check whether the expected row count for

    SELECT context_key FROM article_words
    JOIN words USING (word_key)
    WHERE word = 'insider';

is much less accurate than the estimate for

    SELECT context_key FROM article_words
    WHERE word_key = (whatever the actual word_key for 'insider' is);

(There's a concrete sketch of that comparison below the quoted message.)

>>> Robert Haas <robertmhaas@xxxxxxxxx> 02/10/10 2:31 PM >>>
On Wed, Feb 10, 2010 at 3:29 AM, Bryce Nesbitt <bryce2@xxxxxxxxxxxxx> wrote:
> Or, if you want to actually read that query plan, try:
> http://explain.depesz.com/s/qYq

Much better, though I prefer a text attachment... anyhow, I think the root of the problem may be that both of the subquery scans under the Append node are seeing hundreds of times more rows than they're expecting, which is causing the planner to choose nested loops higher up in the plan that it might otherwise have implemented some other way. I'm not quite sure why, though.

...Robert
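To make that comparison concrete, here's roughly what I'd run. The word_key value of 1234 is only a placeholder (substitute whatever SELECT word_key FROM words WHERE word = 'insider' actually returns), and the statistics target of 1000 is just an example starting point:

    -- Estimate vs. reality for the join form of the subquery:
    EXPLAIN ANALYZE
    SELECT context_key
    FROM article_words
    JOIN words USING (word_key)
    WHERE word = 'insider';

    -- The same lookup with the word_key supplied directly
    -- (1234 stands in for the real word_key of 'insider'):
    EXPLAIN ANALYZE
    SELECT context_key
    FROM article_words
    WHERE word_key = 1234;

    -- If the direct form estimates well and the join form doesn't,
    -- raising the statistics target on the join column and
    -- re-analyzing may give the planner better input:
    ALTER TABLE article_words ALTER COLUMN word_key SET STATISTICS 1000;
    ANALYZE article_words;

In each plan, compare the estimated rows= figure against the actual rows= figure; a mismatch of hundreds of times, like the one Robert describes under the Append node, is usually what pushes the planner into those nested loops.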