On Thu, Apr 16, 2009 at 10:11 AM, Kevin Grittner <Kevin.Grittner@xxxxxxxxxxxx> wrote:
> Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
>> Bear in mind that those limits exist to keep you from running into
>> exponentially increasing planning time when the size of a planning
>> problem gets big.  "Raise 'em to the moon" isn't really a sane
>> strategy.
>> It might be that we could get away with raising them by one or two
>> given the general improvement in hardware since the values were last
>> looked at; but I'd be hesitant to push the defaults further than
>> that.
>
> I also think that there was a change somewhere in the 8.2 or 8.3 time
> frame which mitigated this.  (Perhaps a change in how statistics were
> scanned?)  The combination of a large statistics target and higher
> limits used to drive plan time through the roof, but I'm now seeing
> plan times around 50 ms for limits of 20 and statistics targets of
> 100.  Given the savings from the better plans, it's worth it, at
> least in our case.
>
> I wonder what sort of testing would be required to determine a safe
> installation default with the current code.

Well, given all the variables, maybe we should instead be targeting
plan time, either indirectly via estimated values, or directly by
allowing a configurable planning timeout, jumping off to an alternate
approach (nestloop style, or GEQO) if available.

merlin

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
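
[For readers following along: the "limits of 20 and statistics targets
of 100" Kevin mentions map to the PostgreSQL GUCs below. This is a
hedged postgresql.conf sketch of that experiment's settings, not a
recommendation -- measure your own plan times before adopting any of
these values. The annotated defaults are those of the 8.3-era releases
under discussion.]

    # Values from Kevin's experiment in this thread, not general advice:
    join_collapse_limit = 20         # default 8; caps explicit-JOIN reordering
    from_collapse_limit = 20         # default 8; caps subquery flattening
    default_statistics_target = 100  # default 10 in 8.3; more histogram buckets
    geqo_threshold = 12              # FROM-item count at which the planner
                                     # switches to the genetic (GEQO) optimizer
                                     # instead of exhaustive search

These can also be tried per-session with SET before changing the
server-wide config, e.g. SET join_collapse_limit = 20; then EXPLAIN
ANALYZE the problem query and compare planning cost against runtime
savings.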