Re: Increase Query Speed

On 27 Jul 2010, at 21:48, Jamie Kahgee wrote:

> EXPLAIN ANALYZE SELECT page, count(page) as impressions FROM campaign_impressions WHERE campaign = 42 and "timestamp" BETWEEN '2010-05-21 00:00:00' AND '2010-07-27 00:00:00' group by page order by impressions;
>                                                                                    QUERY PLAN                                                                                    
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>  Sort  (cost=106059.36..106059.40 rows=16 width=4) (actual time=2209.808..2209.816 rows=109 loops=1)
>    Sort Key: (count(page))
>    Sort Method:  quicksort  Memory: 30kB
>    ->  HashAggregate  (cost=106058.84..106059.04 rows=16 width=4) (actual time=2209.749..2209.765 rows=109 loops=1)

Looks fine up to here.

>          ->  Bitmap Heap Scan on campaign_impressions  (cost=19372.78..102534.06 rows=704956 width=4) (actual time=424.023..1980.987 rows=1010896 loops=1)
>                Recheck Cond: (campaign = 42)
>                Filter: (("timestamp" >= '2010-05-21 00:00:00'::timestamp without time zone) AND ("timestamp" <= '2010-07-27 00:00:00'::timestamp without time zone))

Here's your problem. There are about a million rows matching these criteria.

Now, a million rows of width 4 in 2 seconds works out (if I interpret row width correctly) to about 4 MB / 2 s = 2 MB/s, which is probably nowhere near saturating your I/O subsystem's sequential throughput. That suggests those rows are scattered all over the table file, which means Postgres needs to fetch them through random disk I/O.

It would probably help to cluster that table on the campaign_impressions_timestamp_idx index. Most of the rows will then be in chronological order on disk, so fetching them takes far less random I/O - unless the table is already in that order, of course!
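
A minimal sketch (untested; the CLUSTER ... USING syntax needs 8.4 or later, and CLUSTER takes an exclusive lock while it rewrites the table, so run it in a quiet moment):

    CLUSTER campaign_impressions USING campaign_impressions_timestamp_idx;
    -- CLUSTER is a one-time reordering: rows inserted later are not kept
    -- in order, so you may want to re-run it periodically.
    ANALYZE campaign_impressions;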

Another possibility would be to create a summary table that sums up the count of pages by day, so that you would only need to query for the sum of relatively few records. The summary table can then be kept up to date by triggers or something - that's up to you.
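
A rough sketch of what that could look like - all names here are made up, I'm guessing campaign and page are integers, and the UPDATE-then-INSERT upsert is racy under concurrent inserts, so treat it as a starting point only:

    -- Daily rollup table (hypothetical names and types):
    CREATE TABLE campaign_impressions_daily (
        campaign    integer NOT NULL,
        page        integer NOT NULL,
        day         date    NOT NULL,
        impressions bigint  NOT NULL,
        PRIMARY KEY (campaign, page, day)
    );

    -- Backfill from the existing data:
    INSERT INTO campaign_impressions_daily (campaign, page, day, impressions)
    SELECT campaign, page, "timestamp"::date, count(*)
      FROM campaign_impressions
     GROUP BY campaign, page, "timestamp"::date;

    -- Keep it up to date on insert. May need CREATE LANGUAGE plpgsql;
    -- first on versions where plpgsql isn't installed by default.
    CREATE FUNCTION bump_daily_impressions() RETURNS trigger AS $$
    BEGIN
        UPDATE campaign_impressions_daily
           SET impressions = impressions + 1
         WHERE campaign = NEW.campaign
           AND page = NEW.page
           AND day = NEW."timestamp"::date;
        IF NOT FOUND THEN
            -- NB: racy - two concurrent inserts for a new (campaign, page,
            -- day) combination can collide on the primary key here.
            INSERT INTO campaign_impressions_daily
                (campaign, page, day, impressions)
            VALUES (NEW.campaign, NEW.page, NEW."timestamp"::date, 1);
        END IF;
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER campaign_impressions_daily_trg
        AFTER INSERT ON campaign_impressions
        FOR EACH ROW EXECUTE PROCEDURE bump_daily_impressions();

Your query then aggregates at most one row per page per day - a few thousand rows instead of a million:

    SELECT page, sum(impressions) AS impressions
      FROM campaign_impressions_daily
     WHERE campaign = 42
       AND day BETWEEN '2010-05-21' AND '2010-07-26'
     GROUP BY page
     ORDER BY impressions;

(Watch the boundaries: your original BETWEEN only includes midnight of 2010-07-27, while the rollup counts whole days.)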

>                ->  Bitmap Index Scan on campaign_impressions_campaign_idx  (cost=0.00..19196.54 rows=1039330 width=0) (actual time=421.587..421.587 rows=1044475 loops=1)
>                      Index Cond: (campaign = 42)

Nothing wrong here either.

>  Total runtime: 2209.869 ms
> (10 rows)

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.

