Re: bad plan: 8.4.8, hashagg, work_mem=1MB.

Jon Nelson <jnelson+pgsql@xxxxxxxxxxx> writes:
> I ran a query recently where the result was very large. The outer-most
> part of the query looked like this:

>  HashAggregate  (cost=56886512.96..56886514.96 rows=200 width=30)
>    ->  Result  (cost=0.00..50842760.97 rows=2417500797 width=30)

> The row count for 'Result' is in the right ballpark, but why does
> HashAggregate think that it can turn 2 *billion* rows of strings (an
> average of 30 bytes long) into only 200?

200 is the default assumption about the number of groups when the planner is
unable to make any statistics-based estimate.  You haven't shown us any
details, so it's hard to say more than that.
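
For illustration only (the table and column names below are made up), the same
fallback typically shows up when you group on an expression the planner has no
statistics for:

  -- hypothetical example: grouping on an expression of a column, for which
  -- there are no per-expression statistics, so the planner falls back to
  -- the 200-group default
  EXPLAIN SELECT substr(note, 1, 8), count(*)
    FROM big_table
    GROUP BY 1;

  --  HashAggregate  (cost=... rows=200 width=...)
  --    ->  Seq Scan on big_table  (cost=... rows=... width=...)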

			regards, tom lane
