Simon Riggs <simon@xxxxxxxxxxxxxxx> writes:
> The way I'm seeing it, you can't assume the LIMIT will apply to any
> IndexScan that doesn't have an index condition. If it has just a
> filter, or nothing at all, just an ordering then it could easily scan
> the whole index if the stats are wrong.

That statement applies with equal force to *any* plan with a LIMIT;
it's not just index scans.  The real question is to what extent the
tuples satisfying the extra filter condition are randomly distributed
with respect to the index order (or physical order, if it's a seqscan).
The existing cost estimation code effectively assumes that they're
perfectly uniformly distributed, which is a good average-case
assumption but can be horribly wrong in the worst case.

If we could settle on some other model for the probable distribution
of the matching tuples, we could adjust the cost estimates for LIMIT
accordingly.  I don't have enough of a statistics background to know
what a realistic alternative would be.

Another possibility is to still assume a uniform distribution, but
estimate for, say, a 90% probability instead of a 50% probability that
we'll find enough tuples after scanning X amount of the table.  Again,
I'm not too sure what that translates to in terms of the actual math,
but it sounds like something a statistics person could do in their
sleep.

I do not think we should estimate for the worst case, though.  If we
do, we'll hear cries of anguish from a lot of people, including many
of the same ones complaining now, because the planner would stop
picking fast-start plans even for cases where they are orders of
magnitude faster than the alternatives.
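For concreteness, here is a back-of-the-envelope sketch (standalone
Python, nothing to do with the planner's actual C code) of what the
50%-vs-90% estimate could look like under the uniform-scatter
assumption.  All names and numbers below are made up for illustration:
N rows in the table, M of them passing the filter, and a LIMIT of k.

    from math import comb

    def p_at_least_k(N, M, x, k):
        # P(first x of N rows contain >= k of the M matching rows),
        # assuming matches are uniformly scattered (hypergeometric model).
        return 1.0 - sum(comb(M, j) * comb(N - M, x - j) / comb(N, x)
                         for j in range(k))

    def rows_needed(N, M, k, prob):
        # Smallest scan length x with P(>= k matches) >= prob, found by
        # binary search; p_at_least_k is monotonic in x.
        lo, hi = k, N
        while lo < hi:
            mid = (lo + hi) // 2
            if p_at_least_k(N, M, mid, k) >= prob:
                hi = mid
            else:
                lo = mid + 1
        return lo

    N, M, k = 100_000, 1_000, 10       # table rows, matching rows, LIMIT
    print(rows_needed(N, M, k, 0.50))  # ~N*k/M: what the uniform estimate implies
    print(rows_needed(N, M, k, 0.90))  # noticeably larger, still a fast-start plan
    print(N - M + k)                   # 99010: worst case, matches all at the end

Under that model the current assumption prices the scan at roughly
N*k/M rows; pricing it at the 90% point makes the estimate somewhat
more pessimistic without collapsing all the way to the N - M + k
worst case.

			regards, tom lane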