Jonathan Rogers wrote:
>> Look at the EXPLAIN ANALYZE output for both the custom plan (one of the
>> first five executions) and the generic plan (the one used from the sixth
>> time on) and see if you can find and fix the cause for the misestimate.
>
> Yes, I have been looking at both plans and can see where they diverge.
> How could I go about figuring out why Postgres fails to see the large
> difference in plan execution time? I use exactly the same parameters
> every time I execute the prepared statement, so how would Postgres come
> to think that those are not the norm?

PostgreSQL does not consider the actual query execution time; it only
compares its cost estimates for the generic and the custom plan. It also
does not keep track of the parameter values you supply, only of the
average estimated cost of the custom plans.

The problem is either that the planner underestimates the cost of the
generic plan or overestimates the cost of the custom plans.

If you look at the EXPLAIN ANALYZE outputs (probably with
http://explain.depesz.com ), are there any row count estimates that
differ significantly from reality?

Yours,
Laurenz Albe
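
[For reference, a minimal sketch of how to observe the custom-vs-generic
plan switch described above; the statement name, table, column, and
parameter value are placeholders, not taken from the original report:]

  -- Hypothetical prepared statement over a hypothetical table t.
  PREPARE stmt (int) AS
      SELECT * FROM t WHERE col = $1;

  -- The first five executions are planned with custom plans that use
  -- the supplied parameter value.
  EXPLAIN ANALYZE EXECUTE stmt(42);
  EXPLAIN ANALYZE EXECUTE stmt(42);
  EXPLAIN ANALYZE EXECUTE stmt(42);
  EXPLAIN ANALYZE EXECUTE stmt(42);
  EXPLAIN ANALYZE EXECUTE stmt(42);

  -- From the sixth execution on, the planner may switch to the cached
  -- generic plan if its estimated cost compares favorably with the
  -- average estimated cost of the custom plans.
  EXPLAIN ANALYZE EXECUTE stmt(42);

[Comparing the estimated row counts against the actual row counts in
both outputs is what the advice above comes down to: a large divergence
in the generic plan's estimates points to the misestimate the planner
cannot see from its cost comparison alone.]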