Josh Berkus <josh@xxxxxxxxxxxx> writes:
>> If the planner starts operating on the basis of worst case rather than
>> expected-case performance, the complaints will be far more numerous than
>> they are today.

> Yeah, I don't think that's the way to go.  The other thought I had was
> to accumulate a "risk" stat the same as we accumulate a "cost" stat.

> However, I'm thinking that I'm overengineering what seems to be a fairly
> isolated problem, in that we might simply need to adjust the costing on
> this kind of a plan.

mergejoinscansel doesn't currently try to fix up the histogram bounds by
consulting indexes.  At the time I was afraid of the costs of doing that,
and I still am; but it would be a way to address this issue.

Author: Tom Lane <tgl@xxxxxxxxxxxxx>
Branch: master Release: REL9_0_BR [40608e7f9] 2010-01-04 02:44:40 +0000

    When estimating the selectivity of an inequality "column > constant" or
    "column < constant", and the comparison value is in the first or last
    histogram bin or outside the histogram entirely, try to fetch the actual
    column min or max value using an index scan (if there is an index on the
    column).  If successful, replace the lower or upper histogram bound with
    that value before carrying on with the estimate.  This limits the
    estimation error caused by moving min/max values when the comparison
    value is close to the min or max.  Per a complaint from Josh Berkus.

    It is tempting to consider using this mechanism for mergejoinscansel as
    well, but that would inject index fetches into main-line join estimation
    not just endpoint cases.  I'm refraining from that until we can get a
    better handle on the costs of doing this type of lookup.

			regards, tom lane
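
As a rough illustration of the endpoint correction the commit message
describes, here is a minimal C sketch.  It is not the actual selfuncs.c
code; hist_sel_lt, its actual_max parameter, and the numbers in main() are
invented for the example.  The idea is the same, though: interpolate
"col < constval" within an equal-frequency histogram, and when an
index-fetched maximum is available, substitute it for the stale upper
histogram bound before interpolating.  The commit applies the symmetric
substitution to the lower bound for "column > constant".

/*
 * Minimal sketch (not PostgreSQL source) of equi-depth histogram
 * selectivity for "col < constval".  bounds[] holds nbounds sorted
 * histogram boundaries, i.e. nbounds - 1 equal-frequency bins.  The
 * hypothetical actual_max argument stands in for a maximum fetched
 * from a btree index; when supplied, it replaces the stale upper
 * histogram bound before interpolation, as in the commit above.
 */
#include <stdio.h>

static double
hist_sel_lt(double constval, const double *bounds, int nbounds,
            const double *actual_max)   /* NULL if no index lookup done */
{
    int     nbins = nbounds - 1;
    double  hi_bound = actual_max ? *actual_max : bounds[nbounds - 1];
    int     i;

    if (constval <= bounds[0])
        return 0.0;
    if (constval >= hi_bound)
        return 1.0;

    for (i = 0; i < nbins; i++)
    {
        double  lo = bounds[i];
        double  hi = (i == nbins - 1) ? hi_bound : bounds[i + 1];

        if (constval < hi)
        {
            /* linear interpolation within the bin containing constval */
            double  frac = (hi > lo) ? (constval - lo) / (hi - lo) : 0.5;

            return ((double) i + frac) / (double) nbins;
        }
    }
    return 1.0;                 /* not reached */
}

int
main(void)
{
    /* histogram gathered by a (now stale) ANALYZE: values 0..1000 */
    double  bounds[] = {0, 250, 500, 750, 1000};
    /* rows inserted since then have pushed the real maximum to 2000 */
    double  actual_max = 2000;

    printf("stale estimate:     %.3f\n",
           hist_sel_lt(1500, bounds, 5, NULL));
    printf("corrected estimate: %.3f\n",
           hist_sel_lt(1500, bounds, 5, &actual_max));
    return 0;
}

With these sample numbers, the stale histogram makes "col < 1500" look like
it matches everything (1.000), while stretching the last bin out to the
index-fetched maximum brings the estimate down to 0.900.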