Martijn van Oosterhout <kleptog@xxxxxxxxx> writes:
> On Fri, Dec 15, 2006 at 12:20:46PM +0000, Simon Riggs wrote:
>> Maybe sampling every 10 rows will bring things down to an acceptable
>> level (after the first N). You tried less than 10 didn't you?

> Yeah, it reduced the number of calls as the count got larger. It broke
> somewhere, though I don't quite remember why.

The fundamental problem with it was the assumption that different
executions of a plan node will have the same timing.  That's not true,
in fact not even approximately true.  IIRC the patch did realize that
first-time-through is not a predictor for the rest, but some of our plan
nodes have enormous variance even after the first time.  I think the
worst case is batched hash joins.

			regards, tom lane
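
[The failure mode Tom describes can be illustrated with a toy simulation; this is hypothetical code, not the patch itself. Timing only every Nth call and scaling up works when per-call cost is uniform, but a node with periodic expensive calls (like a hash join reloading a batch) gets badly over- or under-estimated depending purely on whether the sampler's phase happens to land on the spikes:]

```python
# Hypothetical simulation (not PostgreSQL source) of why extrapolating
# from sampled per-call timings breaks when executions of a plan node
# do not all cost the same.

def extrapolate(costs, sample_every=10):
    """Estimate total time by timing only every Nth call and scaling up."""
    sampled = costs[::sample_every]
    return sum(sampled) / len(sampled) * len(costs)

n = 10_000

# Uniform per-call cost: sampling extrapolates perfectly.
uniform = [1.0] * n
print(extrapolate(uniform), sum(uniform))        # 10000.0 vs 10000.0

# Spiky cost: every 1000th call pays a large "load next batch" penalty.
# Whether the sampler sees the spikes is pure luck of phase alignment.
aligned    = [1.0 + (500.0 if i % 1000 == 0 else 0.0) for i in range(n)]
misaligned = [1.0 + (500.0 if i % 1000 == 3 else 0.0) for i in range(n)]
print(extrapolate(aligned), sum(aligned))        # 60000.0 vs 15000.0
print(extrapolate(misaligned), sum(misaligned))  # 10000.0 vs 15000.0
```

[Both spiky workloads truly cost 15000 units, yet the sampler reports 4x too much or a third too little; no per-node calibration on the first pass can fix that, since the variance persists across the whole scan.]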