Re: anti-join chosen even when slower than old plan

"Kevin Grittner" <Kevin.Grittner@xxxxxxxxxxxx> writes:
> Robert Haas <robertmhaas@xxxxxxxxx> wrote:
>> Unfortunately, to know how much data we're going to grovel
>> through, we need to know the plan; and to decide on the right
>> plan, we need to know how much data we're going to grovel through.
 
> And that's where they've been ending.
 
> The only half-sane answer I've thought of is to apply a different
> cost to full-table or full-index scans based on the ratio with
> effective cache size.

This might have some connection to some rather half-baked ideas I've
been having in connection with the generalized-inner-indexscan problem.
I don't have anything in the way of a coherent presentation to make yet,
but the thing I'm being forced to realize is that sane modeling of a
complex subplan that's on the inside of a nestloop join requires
treating *every* scan type as having different costs "the first time"
versus "during rescan".  If the total amount of data touched in the
query is less than effective_cache_size, it's not unreasonable to
suppose that I/O costs during rescan might be zero, even for a seqscan or
a non-parameterized indexscan.  In fact, only parameterized indexscans
would be able to touch pages they'd not touched the first time, and so
they ought to have higher, not lower, rescan costs in this environment.
But once the total data volume exceeds effective_cache_size, you have to
do something different, since you can no longer assume the data is
all cached from the first scan.  (This isn't quite as hard as the case
you're talking about, since I think the relevant data volume is the sum
of the sizes of the tables used in the query; which is easy to
estimate at the start of planning, unlike the portion of the tables
that actually gets touched.)
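
To make that a bit more concrete, here's a toy illustration (just
made-up C, nothing resembling actual planner code; the names and
numbers are arbitrary) of charging each scan type a separate I/O cost
for the first scan versus a rescan:

#include <stdio.h>
#include <stdbool.h>

/* Toy cost model: separate first-scan and rescan I/O charges. */
typedef struct ScanCost
{
    double first_io;   /* I/O cost charged the first time through */
    double rescan_io;  /* I/O cost charged on each rescan */
} ScanCost;

/*
 * If the sum of the table sizes used in the query fits within
 * effective_cache_size, assume rescans find everything in cache --
 * except for a parameterized indexscan, which can touch pages it never
 * visited the first time and so keeps its full I/O charge.
 */
static ScanCost
cost_scan(double first_io, double total_query_pages,
          double effective_cache_pages, bool parameterized)
{
    ScanCost c;

    c.first_io = first_io;
    if (total_query_pages <= effective_cache_pages && !parameterized)
        c.rescan_io = 0.0;
    else
        c.rescan_io = first_io;
    return c;
}

int
main(void)
{
    /* made-up numbers: 10000-page working set, 12000-page cache */
    ScanCost seq = cost_scan(400.0, 10000, 12000, false);
    ScanCost pidx = cost_scan(50.0, 10000, 12000, true);

    printf("seqscan rescan I/O:          %.1f\n", seq.rescan_io);
    printf("param indexscan rescan I/O:  %.1f\n", pidx.rescan_io);
    return 0;
}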

An idea that isn't even half-baked yet is that once we had a cost model
like that, we might be able to produce plans that are well-tuned for a
heavily cached environment by applying the "rescan" cost model even to
the first scan for a particular query.  So that might lead to some sort
of "assume_cached" GUC parameter, and perhaps Kevin could tune his
reporting queries by turning that off instead of messing with individual
cost constants.  Or maybe we could be smarter if we could extract an
estimate for the amount of data touched in the query ... but like you,
I don't see a good way to get that number soon enough.
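
In terms of that toy model, the knob wouldn't have to do much more than
this (again purely hypothetical; no such GUC exists today, and the
default shown is arbitrary):

#include <stdio.h>
#include <stdbool.h>

static bool assume_cached = false;  /* imaginary GUC */

/*
 * With assume_cached on, even the first scan is charged the (cheap)
 * rescan I/O cost, so the whole plan is costed as if the data were
 * already resident -- rather than nudging individual cost constants
 * such as random_page_cost one at a time.
 */
static double
scan_io_charge(double first_io, double rescan_io, bool is_rescan)
{
    if (is_rescan || assume_cached)
        return rescan_io;
    return first_io;
}

int
main(void)
{
    printf("default:       %.1f\n", scan_io_charge(400.0, 0.0, false));
    assume_cached = true;
    printf("assume_cached: %.1f\n", scan_io_charge(400.0, 0.0, false));
    return 0;
}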

			regards, tom lane

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

