On Tue, Nov 29, 2011 at 7:21 PM, Tyler Hains <thains@xxxxxxxxxxxxxxxxxx> wrote:
> # explain analyze select * from cards where card_set_id=2850 order by card_id limit 1;
>                                   QUERY PLAN
> -----------------------------------------------------------------------------------------------------------------------------------------
>  Limit  (cost=0.00..105.19 rows=1 width=40) (actual time=6026.947..6026.948 rows=1 loops=1)
>    ->  Index Scan using cards_pkey on cards  (cost=0.00..2904875.38 rows=27616 width=40) (actual time=6026.945..6026.945 rows=1 loops=1)
>          Filter: (card_set_id = 2850)
>  Total runtime: 6026.985 ms
> (4 rows)

I believe this is the old problem of the planner expecting the card_set_ids to be randomly distributed over the card_ids, which I guess is not the case here. The planner expects to hit a matching record quickly while scanning the primary key in card_id order, and instead gets a nasty surprise: it has to walk past a long stretch of non-matching rows before the first row with card_set_id = 2850 turns up.

It seems there is no perfect solution, but a few things you might want to try (a short sketch follows below):

- fooling with random_page_cost / seq_page_cost / work_mem
- writing the sort as "order by card_id-1", so the primary-key index no longer matches the ORDER BY
- an index on (card_set_id, card_id)

Greetings
Marcin
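
For what it's worth, here is a minimal sketch of the last two suggestions, using only the table and column names visible in the quoted query; the index name is made up, and the min/max check is just a quick way to eyeball whether the rows really are clustered:

-- Quick sanity check of the assumption: where do the card_set_id = 2850
-- rows sit within the overall card_id range?
SELECT min(card_id), max(card_id)
  FROM cards
 WHERE card_set_id = 2850;

-- Composite index (hypothetical name): the rows of one card_set_id come
-- back already sorted by card_id, so LIMIT 1 can stop at the first match.
CREATE INDEX cards_set_id_card_id_idx ON cards (card_set_id, card_id);

-- Workaround without a new index: ordering on an expression instead of the
-- bare column keeps the planner from walking cards_pkey just to satisfy
-- the ORDER BY. The result order is the same as ORDER BY card_id.
SELECT *
  FROM cards
 WHERE card_set_id = 2850
 ORDER BY card_id - 1
 LIMIT 1;

Either way, re-run EXPLAIN ANALYZE afterwards. The composite index is the cleaner fix; the expression trick only nudges the planner towards a different plan and is worth verifying before relying on it.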