Jeremy Haile wrote:
Here is the explain analyze output:
Well, the row estimates are about as far out as you can get:
  -> Index Scan using activity_log_facts_pkey on activity_log_facts
       (cost=0.00..1831613.82 rows=1539298 width=12)
       (actual time=0.050..0.050 rows=0 loops=1)

  -> Index Scan Backward using activity_log_facts_pkey on activity_log_facts
       (cost=0.00..1831613.82 rows=1539298 width=12)
       (actual time=0.004..0.004 rows=0 loops=1)

  -> Index Scan using activity_log_facts_dtcreatedate_idx on activity_log_facts
       (cost=0.00..5406927.50 rows=1539298 width=12)
       (actual time=100221.953..100221.953 rows=0 loops=1)

  -> Index Scan Backward using activity_log_facts_dtcreatedate_idx on activity_log_facts
       (cost=0.00..5406927.50 rows=1539298 width=12)
       (actual time=56367.364..56367.364 rows=0 loops=1)
Hmm - it's using the indexes on dtCreateDate and nlogid, which seems
broadly sensible, and then plans to limit the results for min()/max().
However, it's clearly wrong about how many rows will satisfy
  nlogid > (select max(a.end_nlogid) from activity_log_import_history a)
The planner expects around 1.5 million matching rows, yet every one of
the scans above actually returned zero (there's a quick way to check
that estimate on its own, sketched just after the query below).
select min(nlogid) as start_nlogid,
       max(nlogid) as end_nlogid,
       min(dtCreateDate) as start_transaction_timestamp,
       max(dtCreateDate) as end_transaction_timestamp
from activity_log_facts
where nlogid > (select max(a.end_nlogid)
                from activity_log_import_history a)
  and dtCreateDate < '2006-12-18 9:10'
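
To see how far out that particular estimate is on its own, something
like this should do (just a sketch - it's the same condition your
query already uses):

  explain analyze
  select count(*)
  from activity_log_facts
  where nlogid > (select max(a.end_nlogid)
                  from activity_log_import_history a);

Compare the estimated rows against the actual rows on that scan - in
the plan above they differ by about 1.5 million.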
If you run explain on the other forms of your query, I'd guess the
estimates are much more accurate. There's a simple way to see whether
that is the issue. Run the sub-query on its own and substitute the
value it returns into the query above as a literal. Then try the same
thing as a prepared query with a parameter. If it's down to the nlogid
estimates, then the first should be fast and the second slow.
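
Something like this, for example (fact_range and the value 12345678
are just placeholders - use whatever the sub-query really returns):

  -- 1. Run the sub-query on its own:
  select max(a.end_nlogid) from activity_log_import_history a;

  -- 2. Substitute the value it returns as a plain literal:
  select min(nlogid) as start_nlogid,
         max(nlogid) as end_nlogid,
         min(dtCreateDate) as start_transaction_timestamp,
         max(dtCreateDate) as end_transaction_timestamp
  from activity_log_facts
  where nlogid > 12345678          -- value from step 1
    and dtCreateDate < '2006-12-18 9:10';

  -- 3. The same query as a prepared statement with a parameter
  --    (adjust the parameter type if nlogid isn't an integer):
  PREPARE fact_range(integer) AS
    select min(nlogid) as start_nlogid,
           max(nlogid) as end_nlogid,
           min(dtCreateDate) as start_transaction_timestamp,
           max(dtCreateDate) as end_transaction_timestamp
    from activity_log_facts
    where nlogid > $1
      and dtCreateDate < '2006-12-18 9:10';
  EXECUTE fact_range(12345678);

With the literal in place the planner can compare the value against
the statistics for nlogid; with the parameter it has to plan without
knowing the value, much as it does with the sub-query, so you'd expect
the same over-estimate.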
--
Richard Huxton
Archonet Ltd