On Thu, Apr 14, 2011 at 12:19 AM, Tomas Vondra <tv@xxxxxxxx> wrote:
> Another issue is that when measuring multiple values (processing of
> different requests), the decisions may be contradictory so it really
> can't be fully automatic.

I don't think it's soooo dependent on workload. It's dependent on access patterns (and working set sizes), and all of that can be quantified, as opposed to "workload".

I've been meaning to try this for a while now, and it need not be as expensive as one would imagine. It just needs a clever implementation that isn't too intrusive and that is customizable enough not to alienate DBAs.

I'm not doing database stuff ATM (though I've been doing it for several years), and I don't expect to return to database tasks for a few months. But whenever I get back to it, sure, I'd be willing to invest time on it.

What an automated system can do that a DBA cannot, and it's why this idea occurred to me in the first place, is tailor the metrics to variable contexts and situations. For example, I had a DB that was working perfectly fine most of the time, but some days it got "overworked" and sticking with fixed cost variables made no sense: in those situations, the effective random page cost was insanely high because of the workload, but sequential scans would have run much faster because of OS read-ahead and because of synchronized scans. I'm talking about a decision support system that ran lots of heavy-duty queries, where sequential scans are a real alternative; I reckon most OLTP systems are different. (There's a rough sketch of what I mean at the end of this mail.)

So, to make things short, adaptability to varying conditions is what I'd imagine this technique would provide, and that is something a DBA cannot provide, no matter how skilled. That, plus the advent of SSDs and the really different I/O characteristics of different tablespaces, only strengthens my intuition that automation might be better than parameterization.
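
To make that concrete, here is a rough, untested sketch of the kind of probe I have in mind (Python; the scratch-file path and sizes are just placeholders, and OS caching/read-ahead will skew the numbers unless it's run under the load you actually care about, which is rather the point):

    # Rough sketch, not a real tool: estimate an effective random_page_cost
    # by timing sequential vs. random 8 kB reads against a scratch file that
    # is assumed to sit on the same volume as the tablespace being tuned.
    import os
    import random
    import time

    BLOCK = 8192                              # PostgreSQL page size
    TEST_FILE = "/var/tmp/pgcost_probe.dat"   # placeholder path
    FILE_BLOCKS = 64 * 1024                   # 512 MB probe file; ideally
                                              # larger than the OS cache

    def build_probe_file():
        """Create the scratch file once, filled with throwaway data."""
        if (os.path.exists(TEST_FILE)
                and os.path.getsize(TEST_FILE) >= FILE_BLOCKS * BLOCK):
            return
        block = b"x" * BLOCK
        with open(TEST_FILE, "wb") as f:
            for _ in range(FILE_BLOCKS):
                f.write(block)

    def timed_reads(offsets):
        """Read one block at each offset, return elapsed seconds."""
        fd = os.open(TEST_FILE, os.O_RDONLY)
        try:
            start = time.monotonic()
            for off in offsets:
                os.lseek(fd, off, os.SEEK_SET)
                os.read(fd, BLOCK)
            return time.monotonic() - start
        finally:
            os.close(fd)

    def suggest_random_page_cost(samples=4096):
        seq_offsets = [i * BLOCK for i in range(samples)]
        rnd_offsets = [random.randrange(FILE_BLOCKS) * BLOCK
                       for _ in range(samples)]
        seq = max(timed_reads(seq_offsets), 1e-9)
        rnd = timed_reads(rnd_offsets)
        # seq_page_cost is 1.0 by convention; clamp to keep the planner sane.
        return max(1.0, min(rnd / seq, 10.0))

    if __name__ == "__main__":
        build_probe_file()
        print("suggested random_page_cost = %.2f" % suggest_random_page_cost())

A ratio measured like that could then be applied per tablespace (ALTER TABLESPACE ... SET (random_page_cost = ...), available since 9.0 if I recall correctly), or, better, collected by the server itself and refreshed as conditions change. Doing that inside the backend without being intrusive is of course the hard part.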