Re: Shouldn't we have a way to avoid "risky" plans?

On Wed, Mar 23, 2011 at 2:12 PM, Josh Berkus <josh@xxxxxxxxxxxx> wrote:
> Folks,
>
>...
> It really seems like we should be able to detect an obvious high-risk
> situation like this one.  Or maybe we're just being too optimistic about
> discarding subplans?

Why not let the GEQO learn from past mistakes?

If a post-mortem analysis of executed queries could somehow be done and
fed back into the planner, these kinds of mistakes would become one-time
occurrences. A rough sketch of that feedback loop follows below.
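To make the idea concrete, here is a rough sketch in Python (purely
illustrative, not PostgreSQL code): after execution, estimated vs.
actual row counts are recorded per clause, and later plannings of the
same clause get their estimates scaled by the learned factor. The
class, keys, threshold and smoothing are all made up for illustration.

# Hypothetical sketch (not PostgreSQL code): a correction cache keyed
# by (relation, clause), updated from post-mortem estimated-vs-actual
# row counts and consulted on later plannings of the same clause.

from collections import defaultdict

class CorrectionCache:
    def __init__(self, miss_threshold=10.0):
        # clause key -> running correction factor (actual / estimated)
        self.factors = defaultdict(lambda: 1.0)
        self.miss_threshold = miss_threshold

    def record(self, clause_key, estimated_rows, actual_rows):
        """Post-mortem: remember how badly we missed on this clause."""
        if estimated_rows <= 0:
            return
        ratio = actual_rows / estimated_rows
        if ratio > self.miss_threshold or ratio < 1.0 / self.miss_threshold:
            # Smooth the factor so one outlier doesn't dominate.
            old = self.factors[clause_key]
            self.factors[clause_key] = 0.5 * old + 0.5 * ratio

    def corrected_estimate(self, clause_key, estimated_rows):
        """Planning time: scale the raw estimate by past experience."""
        return estimated_rows * self.factors[clause_key]

# Usage (made-up table and clause):
cache = CorrectionCache()
cache.record(("orders", "status = 'pending'"),
             estimated_rows=100, actual_rows=250000)
print(cache.corrected_estimate(("orders", "status = 'pending'"), 120))

In PostgreSQL terms such a cache would presumably live in shared memory
so all backends benefit, which is what the "volatile statistics" item
below is getting at.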

Ideas:
 *  only estimate cost from scratch when there is no past experience to
draw on
 *  if rowcount estimates miss by much, a correction cache could be
populated with extra (volatile, i.e. in shared memory) statistics,
along the lines of the sketch above
 *  or, if rowcount estimates miss by much, an autoanalyze could be scheduled
 *  consider plan bailout: execute a tempting plan, but if it takes too
long or its effective cost rises well above the expected cost, bail
out to a safer plan (see the sketch after this list)
 *  account for worst-case performance when evaluating plans
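For the plan-bailout item, here is another illustrative Python sketch
(again not PostgreSQL code, and the function names, slack factor and
"plans as row generators" framing are assumptions): run the cheapest
plan under a time budget derived from its estimated cost, and if the
budget is blown, abandon it and rerun with a safer plan.

# Hypothetical sketch of "plan bailout": execute the tempting plan,
# but give up once it runs 'slack' times longer than expected and
# fall back to the safer plan.

import time

class PlanBailout(Exception):
    pass

def run_with_bailout(tempting_plan, safe_plan, expected_seconds, slack=5.0):
    deadline = time.monotonic() + expected_seconds * slack
    try:
        results = []
        for row in tempting_plan():          # plans modeled as row generators
            if time.monotonic() > deadline:
                raise PlanBailout
            results.append(row)
        return results
    except PlanBailout:
        # Work done so far is discarded; the safe plan starts from scratch.
        return list(safe_plan())

The obvious catch is that the work done by the tempting plan has to be
thrown away (or the plans have to produce rows in a compatible order),
so the bailout threshold would need to be generous enough that the
fallback is still a net win.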



