Re: out of memory

Here is the result with enable_hashagg set to off:
 Nested Loop  (cost=2487858.08..2490896.58 rows=1001 width=34) (actual
time=1028044.781..1030251.260 rows=1000 loops=1)
   ->  Subquery Scan "day"  (cost=2487858.08..2487870.58 rows=1000 width=16)
(actual time=1027996.748..1028000.969 rows=1000 loops=1)
         ->  Limit  (cost=2487858.08..2487860.58 rows=1000 width=12) (actual
time=1027996.737..1027999.199 rows=1000 loops=1)
               ->  Sort  (cost=2487858.08..2487866.47 rows=3357 width=12)
(actual time=1027996.731..1027998.066 rows=1000 loops=1)
                     Sort Key: sum(occurence)
                     ->  GroupAggregate  (cost=2484802.05..2487661.48 rows=3357
width=12) (actual time=810623.035..914550.262 rows=19422774 loops=1)
                           ->  Sort  (cost=2484802.05..2485752.39 rows=380138
width=12) (actual time=810612.248..845427.013 rows=36724340 loops=1)
                                 Sort Key: query
                                 ->  Index Scan using test_date on
queries_detail_statistics  (cost=0.00..2449570.55 rows=380138 width=12) (actual
time=25.393..182029.205 rows=36724340 loops=1)
                                       Index Cond: ((date >= '2006-01-01'::date)
AND (date <= '2006-01-30'::date))
                                       Filter: (((portal)::text = '1'::text) OR
((portal)::text = '2'::text))
   ->  Index Scan using query_string_pkey on query_string  (cost=0.00..3.01
rows=1 width=34) (actual time=2.244..2.246 rows=1 loops=1000)
         Index Cond: ("outer".query = query_string.id)
 Total runtime: 1034357.390 ms
(14 rows)


thanks

The daily table has 250 million records; the query field (bigint) has about 2 million distinct values, and occurence is an int.
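The plan above also shows why the HashAggregate version runs out of memory: the planner estimated only 3,357 groups, but the aggregate actually produced 19,422,774 rows, and a hash table sized for that many bigint keys cannot fit in sort_mem. A sketch that might improve the estimate, assuming the misestimate comes from coarse statistics on the query column (the target value 1000 is an arbitrary choice, not something from the thread):

```sql
-- Raise the per-column statistics target so ANALYZE samples more values
-- and gives the planner a more realistic distinct count for "query".
ALTER TABLE daily.queries_detail_statistics
    ALTER COLUMN query SET STATISTICS 1000;  -- 1000 is an assumed target
ANALYZE daily.queries_detail_statistics;
```

After re-analyzing, EXPLAIN should show a group estimate much closer to the actual row count, which may steer the planner away from HashAggregate on its own.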

The request with HashAggregate is OK when the date range is restricted to about 15 days, like:

  SELECT query_string, DAY.ocu
  FROM search_data.query_string,
       (SELECT SUM(occurence) AS ocu, query
        FROM daily.queries_detail_statistics
        WHERE date >= '2006-01-01' AND date <= '2006-01-15'
          AND portal IN (1,2)
        GROUP BY query
        ORDER BY ocu DESC
        LIMIT 1000) AS DAY
  WHERE DAY.query = id;
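If the statistics cannot be fixed, the enable_hashagg workaround suggested below can be scoped to just the problematic 30-day query instead of the whole session. A sketch using SET LOCAL, which reverts automatically at transaction end (assuming it is acceptable to wrap the statement in a transaction):

```sql
-- Force the planner away from HashAggregate for this statement only.
BEGIN;
SET LOCAL enable_hashagg = off;
SELECT query_string, DAY.ocu
FROM search_data.query_string,
     (SELECT SUM(occurence) AS ocu, query
      FROM daily.queries_detail_statistics
      WHERE date >= '2006-01-01' AND date <= '2006-01-30'
        AND portal IN (1,2)
      GROUP BY query
      ORDER BY ocu DESC
      LIMIT 1000) AS DAY
WHERE DAY.query = id;
COMMIT;  -- SET LOCAL expires here
```

GroupAggregate sorts instead of hashing, so it spills to disk rather than exhausting memory; it is slower, as the ~17-minute runtime above shows, but it completes.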

> On Wed, 2006-02-15 at 09:55, martial.bizel@xxxxxxx wrote:
> > Good morning,
> >
> >
> >
> >
> > I've increased sort_mem up to 2 GB,
> > and the "out of memory" error still appears.
> >
> > Here is the request I'm trying to run, along with its explain plan:
> >
> >  Nested Loop  (cost=2451676.23..2454714.73 rows=1001 width=34)
> >    ->  Subquery Scan "day"  (cost=2451676.23..2451688.73 rows=1000
> width=16)
> >          ->  Limit  (cost=2451676.23..2451678.73 rows=1000 width=12)
> >                ->  Sort  (cost=2451676.23..2451684.63 rows=3357 width=12)
> >                      Sort Key: sum(occurence)
> >                      ->  HashAggregate  (cost=2451471.24..2451479.63
> rows=3357
> > width=12)
> >                            ->  Index Scan using test_date on
> > queries_detail_statistics  (cost=0.00..2449570.55 rows=380138 width=12)
> >                                  Index Cond: ((date >= '2006-01-01'::date)
> AND
> > (date <= '2006-01-30'::date))
> >                                  Filter: (((portal)::text = '1'::text) OR
> > ((portal)::text = '2'::text))
> >    ->  Index Scan using query_string_pkey on query_string  (cost=0.00..3.01
> > rows=1 width=34)
> >          Index Cond: ("outer".query = query_string.id)
> > (11 rows)
>
> OK, so it looks like something is horribly wrong here.  Try running the
> explain analyze query after running the following:
>
>  set enable_hashagg=off;
>
> and see what you get then.
>
