Jeff Janes <jeff.janes@xxxxxxxxx> writes:
> On Mon, Feb 17, 2014 at 1:54 PM, Alistair Bayley <alistair@xxxxxxxxxxx> wrote:
>> I want to understand why the optimiser is choosing the plan with
>> sequential table scans, rather than the plan with index scans.

> The planner clamps the estimated number of rows from an index scan at 1
> row, even if it actually believes the number will be 0.  That makes the
> logic simpler, avoiding the need to test for division by zero all over
> the place, and probably makes it more robust to mis-estimation in most
> use cases.  But in this case, that means it thinks it will find 34 rows,
> one from each partition, which is way too high.

Even if it believed the zero-row estimate it's probably getting
internally, the cost estimate wouldn't change much, because as you say
it still has to assume that the index will be traversed to verify that
there are no such rows.

I notice though that the cost estimate for the seqscan plan isn't all
that much lower than that for the indexscan plan.  Probably lowering
random_page_cost a bit would change the planner's mind.  We have no
information about the total size of the database versus available RAM,
but if it's a mostly memory-resident database then such a change would
be a good idea.

			regards, tom lane
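
A minimal sketch of how one might try this suggestion in a psql session.
The table and column names are hypothetical placeholders, since the
thread does not show the actual schema; SHOW, SET, RESET, and EXPLAIN
are standard PostgreSQL commands.

    -- Hypothetical example: parent_table and some_key are placeholders.
    -- Check the current cost settings (PostgreSQL defaults shown).
    SHOW random_page_cost;   -- default 4.0
    SHOW seq_page_cost;      -- default 1.0

    -- Baseline: with the default cost model the planner may prefer
    -- sequential scans of all 34 partitions, since each partition's
    -- index scan is estimated at a minimum of 1 row.
    EXPLAIN SELECT * FROM parent_table WHERE some_key = 42;

    -- Lower random_page_cost for this session only.  Values near
    -- seq_page_cost are plausible for a mostly memory-resident database.
    SET random_page_cost = 1.5;

    -- Re-plan: with random I/O costed as nearly as cheap as sequential
    -- I/O, the index-scan plan should now cost out lower.
    EXPLAIN SELECT * FROM parent_table WHERE some_key = 42;

    -- Undo the session-local change.
    RESET random_page_cost;

Because SET changes the value only for the current session, the
experiment does not affect other connections; if the lower value
reliably produces better plans, it can then be made permanent in
postgresql.conf or via ALTER SYSTEM.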