2009/3/14 decibel <decibel@xxxxxxxxxxx>
I'd say it would be great for PostgreSQL to automatically replan each execution of a query if the plan indicates it would take some factor (say, 100x, configurable) more time to execute the query than to plan it. That way it would not spend much time planning small queries, but would use the most efficient plan possible for long ones. And even if a query can't be run any better, it would spend only 1/factor more time (1% more for a factor of 100).
> On Mar 10, 2009, at 12:20 PM, Tom Lane wrote:
>
>> fche@xxxxxxxxxx (Frank Ch. Eigler) writes:
>>> For a prepared statement, could the planner produce *several* plans,
>>> if it guesses great sensitivity to the parameter values? Then it
>>> could choose amongst them at run time.
>>
>> We've discussed that in the past. "Choose at runtime" is a bit more
>> easily said than done though --- you can't readily flip between plan
>> choices part way through, if you've already emitted some result rows.
>
> True, but what if we planned for both high and low cardinality cases, assuming that pg_stats indicated both were a possibility? We would have to store multiple plans for one prepared statement, which wouldn't work well for more complex queries (if you did high- and low-cardinality estimates for each table you'd end up with 2^r plans, where r is the number of relations), so we'd need a way to cap it somehow. Of course, whether that's easier than having the ability to throw out a current result set and start over with a different plan is up for debate...
>
> On a related note, I wish there was a way to tell plpgsql not to pre-plan a query. Sure, you can use EXECUTE, but building the query plan is a serious pain in the rear.
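For context, the two PL/pgSQL behaviors mentioned above can be sketched as follows. A static query in a function body is planned once and the plan is cached for the session, while EXECUTE builds a query string and plans it on every call. This is only an illustrative sketch; the table `t`, column `col`, and function name are hypothetical, and it needs a running PostgreSQL server:

```sql
-- Hypothetical function illustrating cached vs. per-call planning.
CREATE FUNCTION count_matches(p_val text) RETURNS bigint AS $$
DECLARE
    n bigint;
BEGIN
    -- Static SQL: planned once, plan cached across calls.
    SELECT count(*) INTO n FROM t WHERE col = p_val;

    -- EXECUTE: re-planned on every call, so the planner sees the actual
    -- value -- but the query string must be built (and quoted) by hand.
    EXECUTE 'SELECT count(*) FROM t WHERE col = '
            || quote_literal(p_val) INTO n;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```

The hand-built string with quote_literal is exactly the "pain in the rear" being complained about: you trade plan caching for correct-but-tedious manual query construction.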
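The parameter-sensitivity problem driving this whole thread can be observed by hand with a prepared statement: compare the plan built for a generic parameter against the plan the optimizer picks when it sees the literal value. A minimal sketch, assuming a hypothetical table `t` with a skewed column `col`:

```sql
-- Hypothetical skewed column: most rows match one value, few match another.
PREPARE q(text) AS SELECT count(*) FROM t WHERE col = $1;

-- Plan chosen without knowing the parameter value:
EXPLAIN EXECUTE q('rare_value');

-- Plan chosen when the planner sees the actual value:
EXPLAIN SELECT count(*) FROM t WHERE col = 'rare_value';

DEALLOCATE q;
```

If the two plans differ badly for skewed values, that is the case where replanning (or keeping several plans) would pay for itself.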