Re: Planner mis-estimation using nested loops followup

At 00:24 08/03/19, Matthew wrote:
On Tue, 18 Mar 2008, Chris Kratz wrote:
In moderately to very complex ad hoc queries in our system, the planner consistently and massively underestimated the number of rows coming out of a join at a low level, which made these queries very slow and inefficient.
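One common way such underestimates arise (the numbers below are assumed for illustration, not taken from the original report) is correlated predicates: the planner multiplies per-column selectivities as if they were independent, so two 1% filters that actually select the same rows get estimated at 0.01 x 0.01 instead of 0.01:

```python
# Illustration with assumed numbers: why correlated predicates can make
# a planner underestimate row counts by orders of magnitude.

total_rows = 1_000_000
sel_a = sel_b = 0.01  # each predicate matches 1% of the table

# Independence assumption: selectivities multiply.
independent_estimate = total_rows * sel_a * sel_b

# Fully correlated predicates: both match the same 1% of rows.
correlated_actual = total_rows * sel_a

print(independent_estimate)  # 100.0
print(correlated_actual)     # 10000.0 -- a 100x underestimate
```

A 100x-too-small row count at a low-level join is exactly the situation where a nested loop, chosen because the inner side "only" returns a handful of rows, becomes catastrophically slow.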

I have long thought that Postgres should be a little more cautious about its estimates and sometimes assume the worst-case scenario, rather than blindly following the statistics. The problem is that Postgres uses the statistics to generate a best estimate of the cost, but it does not take into account the consequences of being wrong. If it were more clever, it could decline to use the algorithm that is optimal under the best estimate when that algorithm could blow up to 1000 times the work if the estimates are even slightly off.
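The idea can be sketched as a risk-adjusted cost function. This is not how the PostgreSQL planner works; all names, costs, and the weighting are invented for illustration. The point is that a plan whose worst case is bounded can beat a plan whose best case is cheaper:

```python
# Hypothetical sketch: picking a plan by best-case cost alone vs.
# penalizing plans whose worst case blows up when estimates are wrong.

def risk_adjusted_cost(best_case, worst_case, risk_weight=0.5):
    """Blend the optimistic cost estimate with the blow-up cost."""
    return (1 - risk_weight) * best_case + risk_weight * worst_case

# Assumed numbers: a nested loop is cheapest if the row estimate holds,
# but costs ~1000x more if it does not; a hash join is slightly worse
# in the best case but degrades gracefully.
plans = {
    "nested_loop": {"best": 100.0, "worst": 100_000.0},
    "hash_join":   {"best": 150.0, "worst": 400.0},
}

best_estimate_choice = min(plans, key=lambda p: plans[p]["best"])
risk_aware_choice = min(
    plans,
    key=lambda p: risk_adjusted_cost(plans[p]["best"], plans[p]["worst"]),
)

print(best_estimate_choice)  # nested_loop: cheapest if estimates hold
print(risk_aware_choice)     # hash_join: bounded downside
```

How to weight best case against worst case (and how to estimate the worst case at all) is exactly the hard part that makes this "a lot of work" in a real planner.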

Such cleverness would be very cool, but (I understand) a lot of work. It would hopefully solve this problem.

Matthew

Just a crazy thought: if Postgres could check its own estimates, or set some limits while executing the query, and, on finding that the estimates were way off, fall back to a less optimal but safer plan, either immediately or on the next execution, that would be cool.
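The "check while executing" half of this can be sketched as follows. PostgreSQL does not do this; the function and the toy plans below are invented, and the real engineering problem (cheaply abandoning a partially executed plan) is waved away by buffering rows:

```python
# Hypothetical sketch: execute with the optimistic plan, but count rows
# as they arrive; if the count exceeds the estimate by some factor,
# abandon it and hand everything to a safer bulk plan.

def run_with_limit(source_rows, estimated_rows, fast_plan, safe_plan,
                   blowup_factor=10):
    """Use fast_plan unless the input blows past the estimate."""
    it = iter(source_rows)
    buffered = []
    for row in it:
        buffered.append(row)
        if len(buffered) > estimated_rows * blowup_factor:
            buffered.extend(it)        # drain the remaining rows
            return safe_plan(buffered) # fall back to the safer plan
    return fast_plan(buffered)

lookup = {i: i * i for i in range(1000)}

def nested_loop(rows):
    # Cheap per row; only sensible when rows is small.
    return [lookup[r] for r in rows]

def hash_join(rows):
    # Same result, but built to handle bulk input gracefully.
    return [r * r for r in rows]

few = run_with_limit(range(5), estimated_rows=10,
                     fast_plan=nested_loop, safe_plan=hash_join)
many = run_with_limit(range(500), estimated_rows=10,
                      fast_plan=nested_loop, safe_plan=hash_join)
print(few)        # [0, 1, 4, 9, 16] -- handled by the nested loop
print(len(many))  # 500 -- estimate blown, handled by the fallback
```

"The next time" variant is easier in practice: record actual row counts after execution and feed them back into planning the next run, rather than switching plans mid-flight.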

KC

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
