On Monday May 22 2006 4:43 pm, Tom Lane wrote:
> "Jim C. Nasby" <jnasby@xxxxxxxxxxxxx> writes:
> > On Wed, May 17, 2006 at 10:29:14PM -0400, Tom Lane wrote:
> >> The reason the default is currently 10 is just
> >> conservatism: it was already an order of magnitude better
> >> than what it replaced (a *single* representative value) and
> >> I didn't feel I had the evidence to justify higher values.
> >> It's become clear that the default ought to be higher, but
> >> I've still got no good fix on a more reasonable default.
> >> 100 might be too much, or then again maybe not.
> >
> > Is the only downside to a large value planning speed? It
> > seems it would be hard to bloat that too much, except in
> > cases where people are striving for millisecond response
> > times, and those folks had better know enough about tuning
> > to be able to adjust the stats target...
>
> It would be nice to have some *evidence*, not unsupported
> handwaving.

Not exactly related to the topic at hand, but we had to set the column-specific statistics target to 50 before the planner would use the better plan. We then tried values up to 500, analyzing after each ALTER, and at each of those settings it fell back into seq scanning. Of 50, 60, 70, 100, 200, 300, and 500, only 50 was foolproof. I have no idea why; I would have expected the plan only to get better with higher targets.

Ed
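
P.S. In case it clarifies what we did, the statements were of this general form (the table and column names here are placeholders, not our real schema):

    -- raise the per-column statistics target, then re-gather stats
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 50;
    ANALYZE orders;

    -- re-check which plan the optimizer picks (this query is a stand-in)
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

We repeated the ALTER/ANALYZE pair for 60, 70, 100, 200, 300, and 500, checking the plan each time.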