Hi Mariel,
On 9/7/2017 7:02 AM, Mariel Cherkassky wrote:
I'm pretty sure that the source tables are changing during the loop. I
have no problem showing the query:
SELECT AREA,
       PHONE,
       TERM_CODE,
       LINE_NO,
       PAYMENT_START_DATE,
       PAYMENT_END_DATE
FROM TC_FINTERMS A
WHERE AREA IN ('32', '75', '31', '35') -- Iditb 04/12/2011
  AND TERM_TYPE = '1'
  AND TERM_CODE NOT IN (15, 255, 180, 182)
  AND (PAYMENT_END_DATE IS NULL
       OR (PAYMENT_END_DATE > current_timestamp
           AND date_trunc('day', PAYMENT_END_DATE) <> date_trunc('day', PAYMENT_START_DATE)))
  AND NOT EXISTS (SELECT 3
                  FROM BL_NLFR
                  WHERE AREA IN ('75', '35') -- Iditb 04/12/2011
                    AND PHONE = A.PHONE)
  AND NOT EXISTS (SELECT 2
                  FROM TV_FINTERM
                  WHERE TERM_CODE = A.TERM_CODE
                    AND (TERM_TYPE_CODE = 556 OR TERM_TYPE_SS = 1::varchar))
  AND NOT EXISTS (SELECT 1
                  FROM NAP_IP_DEBIT B,
                       PS_RF_INST_PROD C
                  WHERE B.INST_PROD_ID = C.INST_PROD_ID
                    AND C.SETID = 'SHARE'
                    AND C.NAP_MAKAT_CD <> 7
                    AND NAP_AREA2 = A.AREA
                    AND NAP_PHONE_NUM = A.PHONE
                    AND (B.NAP_FINTERM_END_DT IS NULL
                         OR B.NAP_FINTERM_END_DT > current_timestamp)
                    AND NAP_BILLING_CATNUM = A.TERM_CODE
                    AND A.LINE_NO = B.NAP_FINTERMS_LINE);
explain of the query:
I don't understand the point of the varying constants in those not-exists
sub-selects ... EXISTS only tests whether the sub-query returns any row at
all, so the value in its select list (1, 2, or 3) is never compared to
anything - the conventional form is simply "select 1". Note that the
sub-selects *are* correlated with TC_FINTERMS (on phone, term_code, area,
and line_no), so they do restrict the result; they show up as the
anti-joins in the plan below.
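For example (borrowing two of your tables), these two return exactly the
same rows, because only row existence matters:

    select phone from tc_finterms a
    where not exists (select 3 from bl_nlfr where phone = a.phone);

    select phone from tc_finterms a
    where not exists (select 1 from bl_nlfr where phone = a.phone);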
-------------------------------
Nested Loop Anti Join  (cost=67766.53..1008863.68 rows=1 width=38)
  ->  Nested Loop Anti Join  (cost=67766.25..1008853.87 rows=1 width=38)
        ->  Hash Anti Join  (cost=67765.12..1008805.25 rows=1 width=38)
              Hash Cond: (a.term_code = tv_finterm.term_code)
              ->  Bitmap Heap Scan on tc_finterms a  (cost=48268.39..843129.37 rows=1647089 width=38)
                    Recheck Cond: ((((term_type)::text = '1'::text) AND (payment_end_date IS NULL)) OR (((term_type)::text = '1'::text) AND (payment_end_date > now())))
                    Filter: (((area)::text = ANY ('{32,75,31,35}'::text[])) AND (term_code <> ALL ('{15,255,180,182}'::integer[])) AND ((payment_end_date IS NULL) OR ((payment_end_date > now()) AND (date_trunc('day'::text, payment_end_date) <> date_trunc('day'::text, payment_start_date)))))
                    ->  BitmapOr  (cost=48268.39..48268.39 rows=1867571 width=0)
                          ->  Bitmap Index Scan on mariel_tc_finterms  (cost=0.00..32332.45 rows=1272789 width=0)
                                Index Cond: (((term_type)::text = '1'::text) AND (payment_end_date IS NULL))
                          ->  Bitmap Index Scan on mariel_tc_finterms  (cost=0.00..15112.39 rows=594782 width=0)
                                Index Cond: (((term_type)::text = '1'::text) AND (payment_end_date > now()))
              ->  Hash  (cost=18808.47..18808.47 rows=55061 width=4)
                    ->  Seq Scan on tv_finterm  (cost=0.00..18808.47 rows=55061 width=4)
                          Filter: ((term_type_code = 556) OR ((term_type_ss)::text = '1'::text))
        ->  Nested Loop  (cost=1.13..24.87 rows=1 width=23)
              ->  Index Scan using ps_rf_inst_prod_comb1 on ps_rf_inst_prod c  (cost=0.56..12.23 rows=2 width=17)
                    Index Cond: (((nap_phone_num)::text = (a.phone)::text) AND ((nap_area2)::text = (a.area)::text))
                    Filter: ((nap_makat_cd <> '7'::numeric) AND ((setid)::text = 'SHARE'::text))
              ->  Index Only Scan using mariel_nap on nap_ip_debit b  (cost=0.56..6.31 rows=1 width=22)
                    Index Cond: ((inst_prod_id = (c.inst_prod_id)::text) AND (nap_billing_catnum = (a.term_code)::numeric))
                    Filter: (((nap_finterm_end_dt IS NULL) OR (nap_finterm_end_dt > now())) AND (a.line_no = (nap_finterms_line)::double precision))
  ->  Index Only Scan using bl_nlfr_ix1 on bl_nlfr  (cost=0.28..18.40 rows=5 width=7)
        Index Cond: ((area = ANY ('{75,35}'::text[])) AND (phone = (a.phone)::text))
-------------------------------
Judging from the explain output, this should be reasonably quick as a
normal batch query ... but according to the estimates, some of those index
scans touch 500K - 1M rows, which is not something you want to be
repeating many times under an open cursor.
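If you want to see how those estimates compare with reality, you can run
the query once with EXPLAIN (ANALYZE, BUFFERS) - bear in mind this
actually executes the query, so do it standalone, not inside your loop:

    explain (analyze, buffers)
    <the complex query>;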
How can I use temporary tables?
First run your complex query once and place the results in a temp table.
Then run your cursor over the temp table. Finally, drop the temp table:
"select into" will fail if the target table already exists, so the drop is
what lets you run the function again.
select <result columns>
into temp table <tablename>
<the complex query>
open <cursor> for
select * from <tablename>
:
close <cursor>
drop table <tablename>
see https://www.postgresql.org/docs/current/static/sql-selectinto.html
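One caveat if you do all of this inside a plpgsql function: there "select
into" is interpreted as assignment into plpgsql variables, so use "create
temp table ... as" instead. A minimal sketch - the function and temp-table
names are made up for illustration, and the where-clause is abbreviated:

    create or replace function process_finterms() returns void as $$
    declare
        rec record;
    begin
        -- materialize the result set once; a temp table is private to the session
        create temp table tmp_finterms as
        select area, phone, term_code, line_no,
               payment_start_date, payment_end_date
        from tc_finterms
        where term_type = '1';  -- plus the rest of the predicates from your query

        -- the loop now iterates over a stable snapshot of the data
        for rec in select * from tmp_finterms loop
            raise notice 'processing phone %', rec.phone;  -- your per-row work here
        end loop;

        -- drop it so the function can be run again in the same session
        drop table tmp_finterms;
    end;
    $$ language plpgsql;

If the whole thing runs inside a single transaction, "create temp table
... on commit drop" saves you the explicit drop.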
The temp table won't change under the open cursor, and so there won't be
any isolation issues. If the performance is *still* bad after doing
this, then it's a server or extension issue.
George