> Hash Join  (cost=154.46..691776.11 rows=10059626 width=100) (actual time=5.191..37551.360 rows=10063432 loops=1)
>   Hash Cond: (a.order_id = o.order_id)
>   ->  Seq Scan on cust_acct a  (cost=0.00..540727.26 rows=10059626 width=92) (actual time=0.022..18987.095 rows=10063432 loops=1)
>   ->  Hash  (cost=124.76..124.76 rows=2376 width=12) (actual time=5.135..5.135 rows=2534 loops=1)
>         ->  Seq Scan on cust_orders o  (cost=0.00..124.76 rows=2376 width=12) (actual time=0.011..2.843 rows=2534 loops=1)
> Total runtime: 43639.105 ms
> (6 rows)

I think this time is adequate -- processing a result of 10 million rows is
necessarily slow. Some tips:

* Recheck the sequential read speed to confirm it is in the expected range.
* Experiment with work_mem -- it is probably too small to build the hash in a
  single batch. Increasing it can save you roughly 10-20 seconds, but take
  care not to push the server into swap. EXPLAIN ANALYZE VERBOSE shows the
  number of batches; ideally there is only one.
* Use a filter, if possible.
* Use a LIMIT, if possible.

If you really must process all of the rows and you need a better response
time, try using a cursor. It is optimized for returning the first rows fast.

Regards
Pavel Stehule

> --
> ---------------------------------------------
> Kevin Kempter - Constent State
> A PostgreSQL Professional Services Company
> www.consistentstate.com
> ---------------------------------------------

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
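The work_mem and cursor suggestions could be sketched like this. The join is
reconstructed from the table and alias names visible in the plan; the selected
columns, the cursor name, and the work_mem value are illustrative assumptions,
not taken from the original query:

```sql
-- Session-local work_mem increase (the value is an assumption; size it to
-- your available RAM so the backend cannot push the machine into swap).
SET work_mem = '256MB';

-- A cursor lets the client start consuming rows almost immediately instead
-- of waiting for the full ~10-million-row result set to be materialized.
BEGIN;

DECLARE acct_cur CURSOR FOR            -- cursor name is hypothetical
    SELECT a.*, o.*                    -- column list is an assumption
    FROM cust_acct   a
    JOIN cust_orders o ON a.order_id = o.order_id;

FETCH 1000 FROM acct_cur;              -- process one batch of rows
-- ...repeat FETCH until it returns no rows...

CLOSE acct_cur;
COMMIT;
```

Cursors must run inside a transaction block (unless declared WITH HOLD), which
is why the DECLARE/FETCH sequence is wrapped in BEGIN/COMMIT.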