Good morning,

I've increased sort_mem up to 2 GB and the "out of memory" error still
appears. Here is the explain plan of the query I'm trying to run:

 Nested Loop  (cost=2451676.23..2454714.73 rows=1001 width=34)
   ->  Subquery Scan "day"  (cost=2451676.23..2451688.73 rows=1000 width=16)
         ->  Limit  (cost=2451676.23..2451678.73 rows=1000 width=12)
               ->  Sort  (cost=2451676.23..2451684.63 rows=3357 width=12)
                     Sort Key: sum(occurence)
                     ->  HashAggregate  (cost=2451471.24..2451479.63 rows=3357 width=12)
                           ->  Index Scan using test_date on queries_detail_statistics  (cost=0.00..2449570.55 rows=380138 width=12)
                                 Index Cond: ((date >= '2006-01-01'::date) AND (date <= '2006-01-30'::date))
                                 Filter: (((portal)::text = '1'::text) OR ((portal)::text = '2'::text))
   ->  Index Scan using query_string_pkey on query_string  (cost=0.00..3.01 rows=1 width=34)
         Index Cond: ("outer".query = query_string.id)
(11 rows)

Any new ideas? Thanks,
MB

> On Tue, 2006-02-14 at 10:32, martial.bizel@xxxxxxx wrote:
> > The explain analyze command crashes with the "out of memory" error.
> >
> > Note that I've already tried many different values for the
> > shared_buffers and sort_mem parameters.
> >
> > Right now, the config file has:
> > sort_mem = 32768
> > shared_buffers = 30000
>
> OK, on the command line, try increasing sort_mem until hash_agg can
> work.  With a 4 GB machine, you should be able to go as high as needed
> here, I'd think.  Try as high as 500000 or so, or more.  Then, once
> explain analyze works, compare the actual versus estimated number of
> rows.
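
For what it's worth, here is a minimal sketch of the session-level test
suggested in the quoted reply. The SELECT is only my reconstruction of
the query from the plan above (the names queries_detail_statistics,
query_string, occurence, query and id come from the plan; the output
column qs.query_string and the DESC ordering are guesses), so substitute
the real statement:

    -- Session-level override only; postgresql.conf is untouched.
    -- The value is in KB, so 500000 is roughly 488 MB.
    SET sort_mem = 500000;

    -- Hypothetical reconstruction of the query behind the plan above.
    EXPLAIN ANALYZE
    SELECT qs.query_string, "day".total
    FROM (SELECT query, sum(occurence) AS total
            FROM queries_detail_statistics
           WHERE date BETWEEN '2006-01-01' AND '2006-01-30'
             AND portal IN ('1', '2')
           GROUP BY query
           ORDER BY total DESC
           LIMIT 1000) AS "day"
    JOIN query_string qs ON qs.id = "day".query;

    RESET sort_mem;  -- fall back to the postgresql.conf value

If the HashAggregate then completes, the numbers to compare are the
estimated rows=3357 and rows=380138 in the plan against the actual row
counts that EXPLAIN ANALYZE prints; a large mismatch would suggest stale
statistics on queries_detail_statistics, which a plain ANALYZE on that
table should correct.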