On Fri, Oct 16, 2015 at 9:14 PM, Jonathan Rogers <jrogers@xxxxxxxxxxxxxxx> wrote:
> On 10/16/2015 08:37 AM, Albe Laurenz wrote:
>> Jonathan Rogers wrote:
>>>> Look at the EXPLAIN ANALYZE output for both the custom plan (one of the
>>>> first five executions) and the generic plan (the one used from the sixth
>>>> time on) and see if you can find and fix the cause for the misestimate.
>>>
>>> Yes, I have been looking at both plans and can see where they diverge.
>>> How could I go about figuring out why Postgres fails to see the large
>>> difference in plan execution time? I use exactly the same parameters
>>> every time I execute the prepared statement, so how would Postgres come
>>> to think that those are not the norm?
>>
>> PostgreSQL does not consider the actual query execution time; it only
>> compares its estimates for the generic and the custom plan.
>> Also, it does not keep track of the parameter values you supply,
>> only of the average custom plan query cost estimate.
>
> OK, that makes more sense then. It's somewhat tedious for the purpose of
> testing to execute a prepared statement six times to see the plan which
> needs to be optimized. Unfortunately, there doesn't seem to be any way
> to force use of a generic plan in SQL based on Pavel Stehule's reply.

Yeah. In the worst case, a query can fail in the generic plan because it
depends on the arguments for dubious things like

SELECT CASE WHEN _arg = 'TEXT' THEN foo::text ...

I'm OK with why those things must fail, but it'd sure be nice to be able
to control the switch to the generic plan.

merlin

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
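For anyone reproducing the behavior discussed above: the "five custom plans, then a generic plan" heuristic can be observed directly in a psql session. The sketch below is illustrative only (the table and parameter value are made up); it simply repeats EXPLAIN ANALYZE EXECUTE so the plan switch becomes visible.

```
-- Hypothetical table/column names; substitute your own schema.
PREPARE q(int) AS
    SELECT count(*) FROM orders WHERE customer_id = $1;

-- Executions 1 through 5: the planner builds a custom plan using the
-- concrete parameter value, so the EXPLAIN output reflects its statistics.
EXPLAIN ANALYZE EXECUTE q(42);
EXPLAIN ANALYZE EXECUTE q(42);
EXPLAIN ANALYZE EXECUTE q(42);
EXPLAIN ANALYZE EXECUTE q(42);
EXPLAIN ANALYZE EXECUTE q(42);

-- Execution 6 onward: the cached generic plan may be chosen if its
-- estimated cost beats the average of the custom-plan estimates,
-- regardless of the actual runtimes observed above.
EXPLAIN ANALYZE EXECUTE q(42);

DEALLOCATE q;
```

Note that prepared statements are per-session, so the counter resets on reconnect; a generic plan shows placeholders like `$1` in its filter conditions, which is a quick way to tell the two plan types apart in the EXPLAIN output.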