Hi Dave,
Thanks for the reply.
I ran it now on Postgres 8.1.4 on my notebook (Windows XP), and the
performance is really much better:
EXPLAIN ANALYZE
SELECT Contrato.Id
, Min( prog.dtsemeio ) AS DtSemIni
, Max( prog.dtsemeio ) AS DtSemFim
, Min( prog.dtembarque ) AS DtEmbIni
, Max( prog.dtembarque ) AS DtEmbFim
, Min( prog.dtentrega ) AS DtEntIni
, Max( prog.dtentrega ) AS DtEntFim
, COUNT(prog.*) AS QtSem
, SUM( CASE WHEN Prog.DtSemeio >= '20060814' THEN 1 ELSE 0 END )
AS QtSemAb
FROM bvz.Contrato
LEFT OUTER JOIN bvz.Prog ON prog.Fk_Contrato = Contrato.Id
WHERE Contrato.Fk_Clifor = 243
GROUP BY 1;
GroupAggregate  (cost=2.18..7312.45 rows=42 width=48) (actual time=0.446..13.195 rows=42 loops=1)
  ->  Nested Loop Left Join  (cost=2.18..7291.22 rows=883 width=48) (actual time=0.103..10.518 rows=1536 loops=1)
        ->  Index Scan using pk_contrato on contrato  (cost=0.00..100.29 rows=42 width=4) (actual time=0.048..3.163 rows=42 loops=1)
              Filter: (fk_clifor = 243)
        ->  Bitmap Heap Scan on prog  (cost=2.18..170.59 rows=50 width=48) (actual time=0.027..0.132 rows=37 loops=42)
              Recheck Cond: (prog.fk_contrato = "outer".id)
              ->  Bitmap Index Scan on fki_prog_contrato  (cost=0.00..2.18 rows=50 width=0) (actual time=0.018..0.018 rows=37 loops=42)
                    Index Cond: (prog.fk_contrato = "outer".id)
Total runtime: 13.399 ms
Where can I see the current random_page_cost value? And is there any
hint about what value I should set?
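My guess (untested) is that something like this would show the current
value from psql:

SHOW random_page_cost;

-- or, with more detail about where the value comes from:
SELECT name, setting, source
FROM pg_settings
WHERE name = 'random_page_cost';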
Thanks in advance.
Luiz
Dave Dutcher wrote:
Well, in this case the queries with LEFT OUTER JOIN and with INNER JOIN
return the same result set. I don't have sufficient knowledge to affirm
it, but I suspect that if the query plan used for fk_clifor = 352 with
LEFT OUTER JOIN were applied to the first query (fk_clifor = 243 with
LEFT OUTER JOIN), we would have a better total runtime.
Is there some way to make this test?
It looks like Postgres used a nested loop join for the fast query and a
merge join for the slow query. I don't think the left join is causing any
problems. On the slower query the cost estimate of the nested loop must
have been higher than the cost estimate of the merge join, because more
rows were expected. You could try disabling merge joins with the command "set
enable_mergejoin=false". Then run the explain analyze again to see if it is
faster.
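For example, something like this should work (I'm assuming here that
the slow query is the same SELECT as above, just with fk_clifor = 352):

SET enable_mergejoin = false;   -- turn merge joins off for this session only

EXPLAIN ANALYZE
SELECT Contrato.Id
, Min( prog.dtsemeio ) AS DtSemIni
, Max( prog.dtsemeio ) AS DtSemFim
, Min( prog.dtembarque ) AS DtEmbIni
, Max( prog.dtembarque ) AS DtEmbFim
, Min( prog.dtentrega ) AS DtEntIni
, Max( prog.dtentrega ) AS DtEntFim
, COUNT(prog.*) AS QtSem
, SUM( CASE WHEN Prog.DtSemeio >= '20060814' THEN 1 ELSE 0 END ) AS QtSemAb
FROM bvz.Contrato
LEFT OUTER JOIN bvz.Prog ON prog.Fk_Contrato = Contrato.Id
WHERE Contrato.Fk_Clifor = 352
GROUP BY 1;

RESET enable_mergejoin;   -- restore the default when you are done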
If it is faster without merge join, then you could try to change your
settings to make the planner prefer the nested loop. I'm not sure what the
best way to do that is. Maybe you could try reducing the random_page_cost,
which should make index scans cheaper.
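For example (session-local, and the value 2 here is only a guess to
experiment with, not a recommendation):

SET random_page_cost = 2;   -- default is 4; lower values make index scans look cheaper
-- then re-run the EXPLAIN ANALYZE and compare the plan and runtime

If that helps, you can make it permanent by setting random_page_cost in
postgresql.conf.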
Dave