mark <markkicks@xxxxxxxxx> writes:
> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id
> DESC limit 6;
>                                       QUERY PLAN
> ----------------------------------------------------------------------------------------------------------------------------------------------------
>  Limit  (cost=0.00..9329.02 rows=6 width=135) (actual time=13612.247..13612.247 rows=0 loops=1)
>    ->  Index Scan Backward using pokes_pkey on pokes  (cost=0.00..5182270.69 rows=3333 width=135) (actual time=13612.245..13612.245 rows=0 loops=1)
>          Filter: (uid = 578439028)
>  Total runtime: 13612.369 ms
> (4 rows)

The problem is the vast disconnect between the estimated and actual row counts for the indexscan (3333 vs 0).  The planner thinks there are three thousand rows matching uid = 578439028, and that encourages it to try a plan that's only going to be fast if at least six such rows show up fairly soon while scanning the index in reverse id order.

What you really want it to do here is scan on the uid index and then sort the result by id ... but that will be slow in exactly the case where this plan is fast, ie, when there are a lot of matching uids.

Bottom line: the planner cannot make the right choice between these alternatives unless it's got decent statistics about the frequency of uid values.  "I analyzed the table about a week ago" is not good enough maintenance policy --- you need current stats, and you might need to bump up the statistics target to get enough data about less-common values of uid.

(Since it's 8.3, the autovac daemon might have been analyzing for you, if you didn't turn off autovacuum.  In that case increasing the statistics target is the first thing to try.)

			regards, tom lane
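
[Editor's note: the advice above could be tried as follows; the table and column names come from the quoted query, but the statistics target value of 100 is an illustrative assumption --- tune it to your data.]

```sql
-- Raise the per-column statistics target so ANALYZE samples more uid
-- values (the 8.3 default_statistics_target is 10; 100 is an assumed
-- starting point, not a recommendation from the post):
ALTER TABLE pokes ALTER COLUMN uid SET STATISTICS 100;

-- Refresh the stats now instead of waiting for autovacuum's next pass:
ANALYZE pokes;

-- Then re-check whether the estimated row count for the filter is
-- closer to reality, and whether the plan changes:
EXPLAIN ANALYZE SELECT * FROM pokes
 WHERE uid = 578439028 ORDER BY id DESC LIMIT 6;
```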