On Sat, 2006-01-28 at 10:55 -0500, Tom Lane wrote:
> Assuming that "month" means what it sounds like, the above would result
> in running twelve parallel sort/uniq operations, one for each month
> grouping, to eliminate duplicates before counting.  You've got sortmem
> set high enough to blow out RAM in that scenario ...

Hrmm, why is it that with a similar query I get a far simpler plan than
you describe, and relatively snappy runtime?

select date
     , count(1) as nads
     , sum(case when premium then 1 else 0 end) as npremium
     , count(distinct(keyword)) as nwords
     , count(distinct(advertiser)) as nadvertisers
  from data
 group by date
 order by date asc

                                           QUERY PLAN
-------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=0.00..14452743.09 rows=721 width=13)
   ->  Index Scan using data_date_idx on data  (cost=0.00..9075144.27 rows=430206752 width=13)
(2 rows)

=# show server_version;
 server_version
----------------
 8.1.2
(1 row)

-jwb
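
(For anyone who wants to poke at the same comparison: a minimal sketch,
assuming the same "data" table and columns as in the query above.  On 8.1
the per-sort memory limit is work_mem, which replaced sort_mem; the 65536
value below is just an illustration, not a recommendation.)

  -- Show the per-sort/per-hash memory limit currently in effect.
  SHOW work_mem;

  -- Example value only: cap each sort step at 64MB (work_mem is in kB on 8.1).
  SET work_mem = 65536;

  -- Inspect which aggregation strategy the planner picks for the
  -- distinct aggregates, without actually running the query.
  EXPLAIN
  select date
       , count(distinct(keyword)) as nwords
       , count(distinct(advertiser)) as nadvertisers
    from data
   group by date
   order by date asc;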