Re: Regarding EXPLAIN and width calculations

On Fri, Nov 19, 2010 at 1:09 PM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> Jon Nelson <jnelson+pgsql@xxxxxxxxxxx> writes:
>> On Fri, Nov 19, 2010 at 12:14 PM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
>>> Hard to comment about this with such an incomplete view of the situation
>>> --- in particular, data types would be a critical factor, and I also
>>> wonder if you're admitting to all the columns involved.
>
>> Here is an example that, while super ugly, does show the problem:
>
> Hm.  In the UNION case, we have a measured average width for the inet
> column from ANALYZE for each table, and we just use that.  In the
> sub-select case, it seems to be falling back to a default estimate for
> the datatype, which surprises me a bit --- at least in HEAD it seems
> like it should be smarter.  I'll go look at that.
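
For illustration, the two query shapes in question look roughly like
this -- the table and column names are stand-ins, not my original test
case:

    -- Plain UNION: the inet column's width comes from ANALYZE stats
    EXPLAIN SELECT a FROM t1 UNION SELECT a FROM t2;

    -- Sub-select plus GROUP BY: the width seems to fall back to the
    -- datatype's default estimate
    EXPLAIN SELECT a
      FROM (SELECT a FROM t1 UNION ALL SELECT a FROM t2) s
     GROUP BY a;
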
>
> As for the rowcount estimates, these aren't the same query so there's no
> reason for them to be the same.  In the UNION case, it's basically
> taking the pessimistic assumption that there are no duplicate rows;
> given the lack of cross-column stats there's no way to be much smarter.
> In the GROUP BY case, the question is how many distinct values of 'a'
> you suppose there are.  This is also pretty hard to be rigorous about
> --- we have an idea of that for each table, but no idea how many
> cross-table duplications there are.  The 200 is just a default estimate
> when it has no idea.  We could probably come up with some better though
> not rigorous estimate, but nobody's worked on it.
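
The per-table distinct estimates mentioned above can be inspected
directly: the n_distinct values that ANALYZE gathers are exposed in the
pg_stats view (a negative value means a fraction of the row count).
Table and column names here are again placeholders:

    SELECT tablename, attname, n_distinct
      FROM pg_stats
     WHERE tablename IN ('t1', 't2')
       AND attname = 'a';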

I've run into this '200' issue a *lot* (see the "HashAggregate
consumes all memory before crashing" issue I asked about earlier). If
I may suggest, a couple of alternatives seem more reasonable to me:

1. Calculate the value as a percentage of the total rows (say, 15%).
2. Make the value a variable, so that it could be set globally or even
per-query. It would be really nice to be able to *tell* the planner
"there is an expected 30% overlap between these tables" (see the
hypothetical sketch after this list).
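
To make the second suggestion concrete, here is a purely hypothetical
sketch -- no such setting exists in PostgreSQL today:

    -- HYPOTHETICAL: "planner_expected_overlap" is invented for
    -- illustration only; there is no such GUC.
    SET planner_expected_overlap = 0.30;  -- expect ~30% duplicate rows
    EXPLAIN SELECT a FROM t1 UNION SELECT a FROM t2;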

Could you point me in the general direction in the source as to where
the 200 value comes from? With tables of hundreds of millions or
billions of rows I see the value 40,000. Both 200 and 40,000 seem like
arbitrary values - perhaps they are calculated similarly?
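
My best guess so far is DEFAULT_NUM_DISTINCT in
src/include/utils/selfuncs.h, which get_variable_numdistinct() falls
back to when no statistics are available -- for example, when grouping
by an expression (again, "t1" is a placeholder):

    -- Grouping by an un-analyzed expression typically falls back to
    -- the planner's default ndistinct estimate (rows=200 in the plan).
    EXPLAIN SELECT a::text || 'x', count(*) FROM t1 GROUP BY 1;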

-- 
Jon

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


