Re: Shouldn't we have a way to avoid "risky" plans?

> If the planner starts operating on the basis of worst case rather than
> expected-case performance, the complaints will be far more numerous than
> they are today.

Yeah, I don't think that's the way to go.  The other thought I had was
to accumulate a "risk" stat the same as we accumulate a "cost" stat.

However, I'm starting to think I'm overengineering what's really a fairly
isolated problem; we probably just need to adjust the costing on this kind
of plan.
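
If it does come down to per-query costing, the stopgap I'd probably try
first is transaction-local planner settings. A minimal sketch, using the
standard GUCs with example values that would obviously need tuning:

BEGIN;
SET LOCAL enable_nestloop = off;    -- discourage the risky nested-loop path
SET LOCAL random_page_cost = 2.0;   -- example value only; tune to the storage
-- EXPLAIN ANALYZE <the problem query>;
COMMIT;                             -- SET LOCAL settings end with the transaction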

Also, can I say that the cost figures in this plan are extremely
confusing?  Is it really necessary to show them the way we do?

Merge Join  (cost=29.16..1648.00 rows=382 width=78) (actual time=57215.167..57215.216 rows=1 loops=1)
   Merge Cond: (rn.node_id = device_nodes.node_id)
   ->  Nested Loop  (cost=0.00..11301882.40 rows=6998 width=62) (actual time=57209.291..57215.030 rows=112 loops=1)
         Join Filter: (node_ep.node_id = rn.node_id)
         ->  Nested Loop  (cost=0.00..11003966.85 rows=90276 width=46) (actual time=0.027..52792.422 rows=90195 loops=1)

The first time I saw the above, I thought we had some kind of glibc math
bug on the host system.  Costs are supposed to accumulate upwards.
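
If I'm reading it right (and assuming this is the usual merge-join
early-termination estimate), the join node is only being charged for the
small fraction of the nested loop it expects to consume before one side
runs out of merge keys, roughly:

-- back-of-the-envelope, using the cost figures from the plan above
SELECT round(1648.00 / 11301882.40, 6);  -- ~0.000146, i.e. about 0.015%
                                         -- of the nested loop's total cost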

-- 
                                  -- Josh Berkus
                                     PostgreSQL Experts Inc.
                                     http://www.pgexperts.com
