Re: Huge Data sets, simple queries

"Jeffrey W. Baker" <jwbaker@xxxxxxx> writes:
> On Sat, 2006-01-28 at 10:55 -0500, Tom Lane wrote:
>> Assuming that "month" means what it sounds like, the above would result
>> in running twelve parallel sort/uniq operations, one for each month
>> grouping, to eliminate duplicates before counting.  You've got sortmem
>> set high enough to blow out RAM in that scenario ...

> Hrmm, why is it that with a similar query I get a far simpler plan than
> you describe, and relatively snappy runtime?

You can't see the sort operations in the plan, because they're invoked
implicitly by the GroupAggregate node.  But they're there.

Also, a plan involving GroupAggregate is going to run the "distinct"
sorts sequentially, because it's dealing with only one grouping value at
a time.  In the original case, the planner probably realizes there are
only 12 groups and therefore prefers a HashAggregate, which will try
to run all the sorts in parallel.  Your "group by date" isn't a good
approximation of the original conditions because there will be a lot
more groups.
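To make the contrast concrete, here is a sketch with a made-up table (`logins(username, ts)` and both queries are hypothetical, not from the original thread):

```sql
-- Few groups (12 months): the planner may pick HashAggregate, which
-- keeps all groups in memory at once -- so each group's "distinct"
-- sort for count(DISTINCT ...) is in flight simultaneously, and each
-- can use up to sortmem on its own.
SELECT date_trunc('month', ts) AS month,
       count(DISTINCT username)
FROM   logins
GROUP  BY 1;

-- Many groups (one per day): GroupAggregate becomes more attractive.
-- It processes one group at a time, so the distinct sorts run
-- sequentially and only one sortmem allocation is live at once.
SELECT ts::date AS day,
       count(DISTINCT username)
FROM   logins
GROUP  BY 1;
```

In EXPLAIN output neither plan shows explicit Sort nodes for the DISTINCT processing; as noted above, those sorts happen inside the aggregate node itself.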

(We might need to tweak the planner to discourage selecting
HashAggregate in the presence of DISTINCT aggregates --- I don't
remember whether it accounts for the sortmem usage in deciding
whether the hash will fit in memory or not ...)

			regards, tom lane