Hey everybody. I'm trying to speed up a query (not general optimization,
one query in particular), and I'm not sure if there's any way to get it
to go faster.
The query looks like this:
INSERT INTO transaction
(
"tr_acct_num",
"tr_acct_typ",
"tr_atm_rec",
"tr_audit_seq",
"tr_branch_cd",
"tr_cash_amt",
...
"tr_tran_time",
"tr_trn_rev_point",
"tr_typ",
"tr_typ_cd",
"atm_trn_reg_e",
"dataset"
)
SELECT
iq_numeric("tr_acct_num"),
"tr_acct_typ",
iq_numeric("tr_atm_rec"),
iq_numeric("tr_audit_seq"),
iq_numeric("tr_branch_cd"),
iq_numeric("tr_cash_amt"),
...
cast("tr_tran_time" as time),
iq_numeric("tr_trn_rev_point"),
iq_numeric("tr_typ"),
iq_numeric("tr_typ_cd"),
"atm_trn_reg_e",
0
FROM transaction_import
WHERE is_ok = 'TRUE'
;
There doesn't seem to be much I can do about the SELECT portion of this
query (I tried an index on is_ok, but the planner didn't even want to
use it), but is there anything I can do to speed up the insert?
This is the EXPLAIN ANALYZE output for the query:
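In case it's useful, here's the sort of thing I was considering trying
(a rough sketch only; transaction_acct_num_idx is a made-up index name,
since I haven't listed my actual indexes here):

```sql
-- Sketch of a common bulk-load pattern: drop secondary indexes on the
-- target table, run the big INSERT ... SELECT, then rebuild the indexes
-- in a single pass afterward.
BEGIN;

DROP INDEX transaction_acct_num_idx;   -- hypothetical index name

-- ... the INSERT INTO transaction ... SELECT ... FROM transaction_import
--     query from above goes here ...

CREATE INDEX transaction_acct_num_idx ON transaction (tr_acct_num);

COMMIT;
```

My understanding is that rebuilding an index once over ~1.9 million rows
is often cheaper than maintaining it incrementally across ~70,000
individual row insertions, but I'd welcome corrections.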
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Seq Scan on transaction_import (cost=0.00..30953.68 rows=69239
width=434) (actual time=0.146..2974.609 rows=68913 loops=1)
Filter: is_ok
Total runtime: 179091.119 ms
(3 rows)
The query is inserting ~70,000 rows into a table that already holds
~1.8 million. What strikes me is that the seq scan itself accounts for
only ~3 seconds of the ~179 second total, so the rest of the time must
be going into the insert side. Does anybody have any idea how I can keep
this query from taking so long?
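For reference, the indexes on the target table can be listed with a
standard catalog query like this (nothing schema-specific; triggers and
foreign keys would also show up in psql's \d transaction):

```sql
-- List all indexes defined on the target table via the standard
-- pg_indexes system view.
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'transaction';
```

Each index, trigger, and foreign-key check on the target has to be
maintained for every one of the ~70,000 inserted rows, which is where I
suspect the time is going.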
--
Tom Hart
IT Specialist
Cooperative Federal
723 Westcott St.
Syracuse, NY 13210
(315) 471-1116 ext. 202
(315) 476-0567 (fax)