On 10/14/2015 05:00 AM, Albe Laurenz wrote:
> Jonathan Rogers wrote:
>> I have a very complex SELECT for which I use PREPARE and then EXECUTE.
>> The first five times I run "explain (analyze, buffers) execute ..." in
>> psql, it takes about 1s. Starting with the sixth execution, the plan
>> changes and execution time doubles or more. The slower plan is used
>> from then on. If I DEALLOCATE the prepared statement and PREPARE
>> again, the cycle is reset and I get five good executions again.
>>
>> This behavior is utterly mystifying to me since I can see no reason
>> for Postgres to change its plan after an arbitrary number of
>> executions, especially for the worse. When I did the experiment on a
>> development system, Postgres was doing nothing apart from the
>> interactively executed statements. No data were inserted, no settings
>> were changed and no other clients were active in any way. Is there
>> some threshold for five or six executions of the same query?
>>
>> Without delving into the plans themselves yet, what could possibly
>> cause the prepared statement to be re-planned? I have seen the same
>> behavior on Postgres 9.2.10 and 9.4.1.
>
> You are encountering "custom plans", introduced in 9.2.
>
> When a statement with parameters is executed, PostgreSQL will not only
> generate a generic plan, but for the first 5 executions it will
> substitute the arguments and generate and execute a custom plan for
> that.
>
> After 5 executions, the cost of the generic plan is compared to the
> average of the costs of the custom plans. If the cost is less, the
> generic plan will be used from that point on. If the cost is more, a
> custom plan will be used.
>
> So what you encounter is probably caused by bad estimates for either
> the custom plan or the generic plan.

Thanks. That does explain what I've seen.

> Look at the EXPLAIN ANALYZE output for both the custom plan (one of
> the first five executions) and the generic plan (the one used from the
> sixth time on) and see if you can find and fix the cause for the
> misestimate.

Yes, I have been looking at both plans and can see where they diverge.
How could I go about figuring out why Postgres fails to see the large
difference in plan execution time? I use exactly the same parameters
every time I execute the prepared statement, so how would Postgres come
to think that those are not the norm?

> Other than that, you could stop using prepared statements, but that is
> probably not the optimal solution.

This is probably what I'll end up doing. The statement preparation is
the result of a custom layer that does so universally and I'll probably
just turn that feature off.

--
Jonathan Rogers
Socialserve.com by Emphasys Software
jrogers@xxxxxxxxxxxxxxxxxxxxx
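
P.S. For the archives, here is a minimal sketch of the cycle Albe
describes, against a made-up table (the names "q", "orders",
"customer_id" and the value 42 are hypothetical, not my real schema).
As far as I can tell, the sixth-execution comparison is between planner
cost estimates only, never measured run times, which would explain why
the actual slowdown never feeds back into the decision:

    -- hypothetical prepared statement standing in for the real,
    -- complex query
    PREPARE q(integer) AS
        SELECT count(*) FROM orders WHERE customer_id = $1;

    -- executions 1 through 5: a custom plan, planned with $1 = 42
    -- actually substituted into the query
    EXPLAIN (ANALYZE, BUFFERS) EXECUTE q(42);
    -- ... repeat the line above four more times ...

    -- execution 6 onward: if the generic plan's estimated cost beats
    -- the average estimated cost of the five custom plans, the generic
    -- plan is used from here on, regardless of how it actually performs
    EXPLAIN (ANALYZE, BUFFERS) EXECUTE q(42);

    -- dropping the statement resets the counter; the next PREPARE gets
    -- five custom-plan executions again
    DEALLOCATE q;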