Re: Any better plan for this query?..

Dimitri wrote:
any idea if there is a more optimal execution plan possible for this query:

select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as hbeg,
        H.END_DATE as hend, H.NOTE as hnote
         from HISTORY H, STAT S
         where S.REF = H.REF_STAT
         and H.REF_OBJECT = '0000000001'
         order by H.HORDER ;

EXPLAIN ANALYZE output on 8.4:
                                                                   QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=4549.75..4555.76 rows=2404 width=176) (actual
time=1.341..1.343 rows=20 loops=1)
   Sort Key: h.horder
   Sort Method:  quicksort  Memory: 30kB
   ->  Hash Join  (cost=33.50..4414.75 rows=2404 width=176) (actual
time=1.200..1.232 rows=20 loops=1)
         Hash Cond: (h.ref_stat = s.ref)
         ->  Index Scan using history_ref_idx on history h
(cost=0.00..4348.20 rows=2404 width=135) (actual time=0.042..0.052
rows=20 loops=1)
               Index Cond: (ref_object = '0000000001'::bpchar)
         ->  Hash  (cost=21.00..21.00 rows=1000 width=45) (actual
time=1.147..1.147 rows=1000 loops=1)
               ->  Seq Scan on stat s  (cost=0.00..21.00 rows=1000
width=45) (actual time=0.005..0.325 rows=1000 loops=1)
 Total runtime: 1.442 ms
(10 rows)

Table HISTORY contains 200M rows, only 20 needed
Table STAT contains 1000 rows, only 20 needed to be joined to HISTORY values.

The plan doesn't look too bad to me, although the planner is over-estimating the number of matches in the history table (2404 estimated vs. 20 actual). That's a bit surprising given how simple the predicate is. Make sure you've ANALYZEd the table. If that's not enough, you can try increasing the statistics target for the ref_object column, e.g. ALTER TABLE history ALTER COLUMN ref_object SET STATISTICS 500. That might give you a different plan, perhaps with a nested loop join instead of a hash join, which might be faster in this case.
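Something along these lines (500 is just an example target; the useful range is up to 10000, and 8.4's default is 100):

```sql
-- Refresh the planner statistics for the table first
ANALYZE history;

-- If the row estimate is still far off, raise the per-column
-- statistics target and re-analyze so the new target takes effect
ALTER TABLE history ALTER COLUMN ref_object SET STATISTICS 500;
ANALYZE history;

-- Then compare the estimated vs. actual row counts again
EXPLAIN ANALYZE
SELECT * FROM history WHERE ref_object = '0000000001';
```

If the estimate drops close to 20, the planner should see that the join only involves a handful of history rows and may switch to a nested loop.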

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
