Michael Stone <mstone+postgres@xxxxxxxxx> writes:
> Yes. I was looking at the other side; I thought pg could estimate how
> much work it would have to do to hit the limit, but double-checking it
> looks like it can't.

Yes, it does; you just have to understand how to interpret the EXPLAIN
output.  Peter had

 Limit  (cost=0.00..622.72 rows=100 width=8) (actual time=207356.054..207356.876 rows=100 loops=1)
   ->  Index Scan using activity_pk on activity  (cost=0.00..40717259.91 rows=6538650 width=8) (actual time=207356.050..207356.722 rows=100 loops=1)
         Filter: ((state = 10000) OR (state = 10001))
 Total runtime: 207357.000 ms

Notice that the total cost of the LIMIT node is estimated as far less
than the total cost of the IndexScan node.  That's exactly because the
planner doesn't expect the indexscan to run to completion: it charges
the LIMIT for only the fraction of the scan needed to produce 100 of
the estimated 6538650 matching rows, 40717259.91 * (100 / 6538650),
or about 622.7, which is the cost shown for the LIMIT node.

The problem here appears to be a non-random correlation between state
and activity, such that the desired state values are not randomly
scattered in the activity sequence.  The planner doesn't know about
that correlation and hence can't predict the poor startup time.

			regards, tom lane
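
One quick way to confirm that kind of skew is to look at where the
qualifying rows sit in the index order.  A minimal sketch, assuming the
scan is ordered by a single id column (the column name is hypothetical,
since the query itself isn't shown in this message):

    -- Range occupied by the rows the query wants:
    SELECT min(id), max(id), count(*)
    FROM activity
    WHERE state IN (10000, 10001);

    -- Compare against the table-wide range:
    SELECT min(id), max(id)
    FROM activity;

If the matching ids cluster near the top of the overall range, an index
scan in id order has to step over millions of non-matching rows before
returning its first result, which is what the 207-second startup time
above reflects.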
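
When the interesting state values are selective enough, a partial index
is the usual way to sidestep the bad correlation entirely, since every
entry in such an index already satisfies the filter.  Again a sketch
under the same assumed column name:

    -- Index only the rows the query cares about, in id order:
    CREATE INDEX activity_state_partial ON activity (id)
    WHERE state IN (10000, 10001);

    -- The ORDER BY ... LIMIT can then be satisfied directly from the
    -- partial index, with no rows filtered out at scan time:
    EXPLAIN ANALYZE
    SELECT id
    FROM activity
    WHERE state IN (10000, 10001)
    ORDER BY id
    LIMIT 100;

The query's WHERE clause must imply the index predicate for the planner
to consider the partial index; with identical IN lists, as above, it
does.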