On Tue, Aug 6, 2013 at 7:46 AM, Ivan Voras <ivoras@xxxxxxxxxxx> wrote:

> ivoras=# explain analyze select * from lt where id > 900000 limit 10;
>                                            QUERY PLAN
> ----------------------------------------------------------------------------------------------------------------
>  Limit  (cost=0.00..1.71 rows=10 width=9) (actual time=142.669..142.680 rows=10 loops=1)
>    ->  Seq Scan on lt  (cost=0.00..17402.00 rows=101630 width=9) (actual time=142.665..142.672 rows=10 loops=1)
>          Filter: (id > 900000)
>  Total runtime: 142.735 ms
> (4 rows)

I think the problem lies in assuming the sequential scan has zero startup cost, which is not the case here: the scan has to read up to the first page containing a row with id > 900000 before it can return anything. If that were fixed, the index scan would be chosen instead. I don't see a way to fix it without querying the index, which could be a costly operation... except with the newly proposed minmax indexes. I guess here's another case where those indexes would be useful.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
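For anyone following along, the cost arithmetic behind the misestimate can be sketched from the EXPLAIN numbers above: the planner scales the seq scan's total cost by the fraction of its estimated matching rows that the LIMIT needs, and it starts from a startup cost of zero, so the plan looks almost free. The table size of 1,000,000 rows below is an assumption inferred from the id range, not something stated in the EXPLAIN output:

```python
# Figures taken from the EXPLAIN output above.
seqscan_total_cost = 17402.00   # estimated cost to scan the whole table
est_matching_rows = 101630      # planner's estimate for id > 900000
limit_rows = 10

# With LIMIT, the planner assumes matching rows are spread evenly
# through the table, so it charges only the fraction of the scan
# needed to produce 10 of them, on top of a (wrong) zero startup cost.
limit_cost = seqscan_total_cost * (limit_rows / est_matching_rows)
print(round(limit_cost, 2))  # ≈ 1.71, matching "cost=0.00..1.71" above

# In reality the scan must first pass every row with id <= 900000.
# Assuming a hypothetical table of 1,000,000 sequential ids, roughly
# 90% of the table is read before the first matching row appears:
table_rows = 1_000_000
startup_fraction = (table_rows - est_matching_rows) / table_rows
print(round(startup_fraction, 2))  # ≈ 0.9
```

That hidden ~90% of a full scan is exactly the 142 ms of "startup" visible in the actual timings, which the cost model never accounts for.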