Adding a LIMIT clause causes a very slow query:
explain analyze select * from firma2.dok where doktyyp='J' order by dokumnr
limit 100
"Limit (cost=0.00..4371.71 rows=100 width=1107) (actual
time=33189.971..33189.971 rows=0 loops=1)"
" -> Index Scan using dok_dokumnr_idx on dok (cost=0.00..278740.01
rows=6376 width=1107) (actual time=33189.959..33189.959 rows=0 loops=1)"
" Filter: (doktyyp = 'J'::bpchar)"
"Total runtime: 33190.103 ms"
Without the limit it is fast:
explain analyze select * from firma2.dok where doktyyp='J' order by dokumnr
"Sort (cost=7061.80..7077.74 rows=6376 width=1107) (actual
time=0.119..0.119 rows=0 loops=1)"
" Sort Key: dokumnr"
" -> Index Scan using dok_doktyyp on dok (cost=0.00..3118.46 rows=6376
width=1107) (actual time=0.101..0.101 rows=0 loops=1)"
" Index Cond: (doktyyp = 'J'::bpchar)"
"Total runtime: 0.245 ms"
How can I fix this without dropping the dok_doktyyp index, so that LIMIT can
safely be used for paged data access?
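For reference, the paged access pattern I have in mind is roughly the
following (the page size of 100 and the offset value are only examples):

-- page 1
select * from firma2.dok where doktyyp='J' order by dokumnr limit 100;

-- page 4: offset = page_size * (page_number - 1)
select * from firma2.dok where doktyyp='J' order by dokumnr limit 100 offset 300;

Every such page should ideally use the doktyyp index and then sort, instead
of walking dok_dokumnr_idx and filtering.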
indexes:
dok_doktyyp: dok(doktyyp)
dok_dokumnr_idx: dok(dokumnr)
types:
dokumnr int primary key
doktyyp char(1)
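For completeness, the relevant part of the schema corresponds roughly to the
following (other columns are omitted; the row width is about 1107 bytes
according to the plans above):

CREATE TABLE firma2.dok (
    dokumnr  int PRIMARY KEY,   -- document number, used for ordering/paging
    doktyyp  char(1)            -- document type, filtered on
    -- ... many other columns omitted
);

CREATE INDEX dok_doktyyp     ON firma2.dok (doktyyp);
CREATE INDEX dok_dokumnr_idx ON firma2.dok (dokumnr);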
Andrus.
Using PostgreSQL 8.1.4.
--