Re: anti-join chosen even when slower than old plan

On Thu, Nov 11, 2010 at 09:15:58AM -0500, Mladen Gogala wrote:
> Kenneth Marshall wrote:
>> I agree with the goal of avoiding the need for a GUC. This needs to
>> be as automatic as possible. One idea I had was to compute a value
>> for the amount of cached data in the system by keeping a sum, or a
>> weighted sum, of table usage in the system. Smaller tables and
>> indexes would contribute a smaller amount to the total, while larger
>> indexes and tables would contribute a larger amount. Then, by comparing
>> this running total to effective_cache_size, the random and sequential
>> costs for a query could be set. This would allow the case of many 4MB
>> tables to weight disk I/O more heavily than memory I/O. The weighting
>> could be a function of the number of simultaneous users of the table.
>> I know this is a bit of hand-waving, but some sort of dynamic feedback
>> needs to be provided to the planning process as system use increases.
>>
>> Regards,
>> Ken
>>
>>   
> Kenneth, you seem to be concerned only with the accuracy of the planning
> process, not with plan stability. As a DBA who has to monitor real-world
> applications, I find things like an execution plan changing with the use of
> the system to be my worst nightmare. The part where you say that "this needs
> to be as automatic as possible" probably means that I will not be able to do
> anything about it if the optimizer, by any chance, doesn't get it right. That
> looks to me like an entirely wrong way to go.
> When an application developer tunes a piece of SQL, both he and I expect it
> to keep performing that way, not to change its execution plan because the
> system is more heavily utilized than it was an hour ago. Nobody seems to have
> taken seriously my suggestion of a parameter that would simply "invent" the
> percentage out of thin air, because it's obviously not accurate.
> However, planner accuracy is not the only concern. Running applications on
> the system usually requires plan stability. Means of externally controlling
> the execution plan, DBA knobs and buttons that can be turned and pushed to
> produce the desired plan, are also very much desired.
>
> -- 
> Mladen Gogala Sr. Oracle DBA
> 1500 Broadway
> New York, NY 10036
> (212) 329-5251
> www.vmsinfo.com 
>
Hi Mladen,

I think in many ways this is the same problem. Because we are not
modeling the system correctly, the plan cost estimates are inaccurate
for some scenarios, so when plan costs are compared the comparison
itself cannot be trusted. That is what puts terrible plans right next
to good plans in the ranking, and that is what hurts "plan stability".
If the costs are correct, plan stability will actually improve with
the better costing, not get worse: plans with close costs should have
close performance.
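
Just to make that a bit more concrete, here is a rough sketch, in
Python, of the kind of feedback I was hand-waving about. Every name
and number in it is invented for illustration, and none of it is
actual PostgreSQL code: a usage-weighted running total of touched
pages stands in for the "amount of cached data", and the per-page
cost of a random fetch is blended between the sequential-like cached
cost and the full random cost as that total grows past the effective
cache size.

# Rough, invented sketch of the feedback loop described above.
# None of these names or numbers come from PostgreSQL; they only
# illustrate the shape of the idea.

SEQ_PAGE_COST = 1.0               # analogous to seq_page_cost
RANDOM_PAGE_COST = 4.0            # analogous to random_page_cost
EFFECTIVE_CACHE_PAGES = 524_288   # e.g. 4 GB of 8 kB pages


class UsageTracker:
    """Keeps a usage-weighted running total of recently touched pages."""

    def __init__(self) -> None:
        self.weighted_pages = 0.0

    def note_scan(self, relation_pages: int, concurrent_users: int = 1) -> None:
        # Larger relations and more concurrent users push the total up faster.
        self.weighted_pages += relation_pages * concurrent_users

    def decay(self, factor: float = 0.9) -> None:
        # Periodically age out old activity so the total tracks current load.
        self.weighted_pages *= factor

    def cached_fraction(self) -> float:
        """Fraction of the current working set assumed to still be in cache."""
        if self.weighted_pages <= EFFECTIVE_CACHE_PAGES:
            return 1.0
        return EFFECTIVE_CACHE_PAGES / self.weighted_pages

    def effective_random_page_cost(self) -> float:
        """Blend between the cached and uncached cost of a random fetch."""
        frac = self.cached_fraction()
        return frac * SEQ_PAGE_COST + (1.0 - frac) * RANDOM_PAGE_COST


if __name__ == "__main__":
    tracker = UsageTracker()

    # Lightly loaded: a handful of 4MB (512-page) tables, working set cached.
    for _ in range(20):
        tracker.note_scan(relation_pages=512)
    print(tracker.effective_random_page_cost())   # stays near 1.0

    # Heavily loaded: many users hammering a large table, cache overwhelmed.
    tracker.note_scan(relation_pages=2_000_000, concurrent_users=4)
    print(tracker.effective_random_page_cost())   # drifts toward 4.0

The point is only the shape of the curve: while the working set fits
in cache the blended cost stays near the sequential cost, and as
concurrent use pushes the working set well past the cache it drifts
toward the full random cost. That is the behavior the cost comparison
needs if plans with close costs are to keep having close performance.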

Regards,
Ken

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

