Lowering random_page_cost didn't help -- I tried values 2.0 and 1.5. Then I tried the "order by (id - 1)" hack Marcin Mańk proposed:

a9-dev=> create index foo on records(source_id, (id - 1));
CREATE INDEX
a9-dev=> explain analyze select * from records where source_id = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml' order by (id - 1) limit 200;
                                                              QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..379.42 rows=200 width=1124) (actual time=0.137..255.283 rows=200 loops=1)
   ->  Index Scan using foo on records  (cost=0.00..1864617.14 rows=982887 width=1124) (actual time=0.137..255.237 rows=200 loops=1)
         Index Cond: ((source_id)::text = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'::text)
 Total runtime: 255.347 ms
(4 rows)

A significant improvement :) As we can see, it is possible to query the records quickly without changing the table structure. The question is: can I do it without "hacks"?

Michal Nowak
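PS: One hack-free variant that might be worth trying (an untested sketch only; it assumes a plain composite index on (source_id, id) is acceptable and that the planner will recognize the ORDER BY as matching the index order) is to drop the (id - 1) expression and instead make the ORDER BY mirror the index columns:

-- Sketch, not from the measurements above: plain composite index,
-- no expression, so the planner can walk it in order and stop at 200 rows.
create index records_source_id_id on records (source_id, id);

-- Ordering by both leading index columns keeps the sort order identical
-- to the index; with the equality filter on source_id this is the same
-- result order as "order by id". Whether the planner actually picks this
-- index would need to be confirmed with EXPLAIN ANALYZE.
explain analyze
select *
from records
where source_id = 'http://www.wbc.poznan.pl/dlibra/oai-pmh-repository.xml'
order by source_id, id
limit 200;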