On Thu, Jan 27, 2011 at 10:56 AM, Mladen Gogala <mladen.gogala@xxxxxxxxxxx> wrote:
I even tried an almost-equivalent outer join:
explain analyze select e1.empno,e1.ename,e2.empno,e2.ename
from emp e1 left outer join emp e2 on (e1.mgr=e2.empno);
                                              QUERY PLAN
------------------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=0.00..7.25 rows=14 width=16) (actual time=0.028..0.105 rows=14 loops=1)
   Join Filter: (e1.mgr = e2.empno)
   ->  Seq Scan on emp e1  (cost=0.00..2.14 rows=14 width=10) (actual time=0.006..0.010 rows=14 loops=1)
   ->  Materialize  (cost=0.00..2.21 rows=14 width=8) (actual time=0.001..0.003 rows=14 loops=14)
         ->  Seq Scan on emp e2  (cost=0.00..2.14 rows=14 width=8) (actual time=0.001..0.005 rows=14 loops=1)
 Total runtime: 0.142 ms
(6 rows)
This gives me the same result as the recursive version, minus the level column. I am porting an application from Oracle; there is a fairly large table that is accessed via "connect by". Rewriting it as a recursive query is not a problem, but the optimizer doesn't really use the indexes.
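For reference, the CONNECT BY hierarchy maps onto a recursive CTE along these lines (a sketch, assuming the usual emp(empno, ename, mgr) layout with root rows having a null mgr):

-- Sketch of Oracle "START WITH mgr IS NULL CONNECT BY PRIOR empno = mgr",
-- with LEVEL reproduced as an explicit counter column.
with recursive hier as (
    select empno, ename, mgr, 1 as level
    from emp
    where mgr is null            -- START WITH
  union all
    select e.empno, e.ename, e.mgr, h.level + 1
    from emp e
    join hier h on e.mgr = h.empno   -- CONNECT BY PRIOR empno = mgr
)
select empno, ename, level from hier;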
You're still using a 14 row table, though. Postgres isn't going to be stupid enough to use an index in this case when the seq scan is clearly faster unless you go out of your way to absolutely force it to do so. If the table is going to be "fairly large", that's the size you need to be testing and tuning with.
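For instance, you could build a big synthetic hierarchy, index the join column, and look at the plan again. A sketch; the emp_big table, the one-million-row count, and the g/2 parent rule are invented here purely for illustration:

-- Build a hypothetical large hierarchy: row 1 is the root,
-- every other row g has parent g/2.
create table emp_big as
select g as empno,
       'emp' || g as ename,
       case when g = 1 then null else g / 2 end as mgr
from generate_series(1, 1000000) g;

create index emp_big_mgr_idx on emp_big (mgr);
analyze emp_big;

-- Re-check the recursive query at realistic scale. With ~1M rows
-- and an index on mgr, the recursive step's per-level lookups can
-- use an index scan instead of the repeated seq scans you see at 14 rows.
explain analyze
with recursive hier as (
    select empno, mgr from emp_big where mgr is null
  union all
    select e.empno, e.mgr
    from emp_big e join hier h on e.mgr = h.empno
)
select count(*) from hier;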
--
- David T. Wilson
david.t.wilson@xxxxxxxxx