On Fri, 2006-12-15 at 11:50 +0100, Martijn van Oosterhout wrote:
> On Fri, Dec 15, 2006 at 10:28:08AM +0000, Simon Riggs wrote:
> > Until we work out a better solution we can fix this in two ways:
> >
> > 1. EXPLAIN ANALYZE [ [ WITH | WITHOUT ] TIME STATISTICS ] ...
> >
> > 2. enable_analyze_timer = off | on (default) (USERSET)
>
> What exactly would this do? Only count actual rows or something?

Yes. It's better to have this than nothing at all.

> I wrote a patch that tried statistical sampling, but the figures were
> too far off for people's liking.

Well, I like your ideas, so if you have any more...

Maybe sampling every 10 rows will bring things down to an acceptable
level (after the first N). You tried less than 10, didn't you?

Maybe we can count how many real I/Os were required to produce each
particular row, so we can adjust the time per row based upon I/Os.

ISTM that sampling at too low a rate means we can't spot the effects of
cache and I/O, which can often be low frequency but high impact.

> I think the best option is setitimer(), but it's not POSIX so
> platform support is going to be patchy.

Don't understand that. I thought that was to do with alarms and signals.

-- 
Simon Riggs
EnterpriseDB   http://www.enterprisedb.com