On Fri, Nov 12, 2010 at 11:43 AM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> I think his point is that we already have a proven formula
> (Mackert-Lohmann) and shouldn't be inventing a new one out of thin air.
> The problem is to figure out what numbers to apply the M-L formula to.

I'm not sure that's really measuring the same thing, although I'm not
opposed to using it if it produces reasonable answers.

> I've been thinking that we ought to try to use it in the context of the
> query as a whole rather than for individual table scans; the current
> usage already has some of that flavor but we haven't taken it to the
> logical conclusion.

That's got a pretty severe chicken-and-egg problem, though, doesn't it?
You're going to need to know how much data you're touching to estimate
the costs so you can pick the best plan, but you can't know how much
data will ultimately be touched until you've got the whole plan.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
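
P.S. For anyone who hasn't looked at it recently, here's a rough sketch
of the Mackert-Lohman page-fetch estimate, roughly along the lines of
what index_pages_fetched() in costsize.c computes today. The function
and parameter names below are just illustrative (not copied from the
real code), and the clamping details are from memory, but it shows the
shape of the formula: given T pages in the table, roughly b pages of
cache, and Ns tuples fetched, it predicts how many distinct page reads
the scan will actually incur.

/*
 * Sketch of the Mackert-Lohman estimate (illustrative only, not the
 * real costsize.c code): predicted page fetches for a scan touching
 * Ns tuples in a table of T pages, with about b pages of cache.
 */
double
ml_pages_fetched(double Ns, double T, double b)
{
    double  pages_fetched;

    if (T <= b)
    {
        /* table fits in cache: curve saturates at T pages */
        pages_fetched = (2.0 * T * Ns) / (2.0 * T + Ns);
        if (pages_fetched > T)
            pages_fetched = T;
    }
    else
    {
        /* table larger than cache: past this point the cache stops helping */
        double  lim = (2.0 * T * b) / (2.0 * T - b);

        if (Ns <= lim)
            pages_fetched = (2.0 * T * Ns) / (2.0 * T + Ns);
        else
            pages_fetched = b + (Ns - lim) * (T - b) / T;
    }
    return pages_fetched;
}

The open question upthread is really what Ns, T, and b ought to mean if
we try to apply this to the query as a whole rather than per scan.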