On Friday, 10 February 2012 at 20:32:50, Josh Berkus wrote:

> On 2/9/12 2:41 PM, Peter van Hardenberg wrote:
> > Hmm, perhaps we could usefully aggregate auto_explain output.
>
> The other option is to take a statistical approach. After all, what you
> want to do is optimize average response times across all your users'
> databases, not optimize for a few specific queries.
>
> So one thought would be to add pg_stat_statements to your platform
> ... something I'd like to see Heroku do anyway. Then you can sample
> this across dozens (or hundreds) of user databases, each with RPC set to
> a slightly different level, and aggregate it into a heat map.
>
> That's the way I'd do it, anyway.

In such a setup, I sometimes build a ratio between transactions processed and CPU usage. Many indicators exist, inside and outside the database, that are useful to combine and treat simply as a "this is normal behavior" baseline. Over the long term, that makes it easy to see whether things are getting better or worse.

-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation
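As a minimal sketch of the ratio indicator described above (transactions processed per unit of CPU), the following Python fragment shows one way to derive such a baseline from two periodic samples. The function name and the choice of counters (PostgreSQL's `pg_stat_database.xact_commit` for transactions, OS-level CPU seconds for usage) are illustrative assumptions, not something the original post specifies:

```python
# Sketch: a "transactions per CPU-second" health indicator, computed from
# two samples taken some interval apart. The transaction counter could come
# from pg_stat_database.xact_commit and the CPU seconds from the OS (e.g.
# /proc/stat on Linux); both data sources are assumptions for illustration.

def txn_per_cpu_second(sample_a, sample_b):
    """Each sample is a (xact_commit_total, cpu_seconds_total) tuple.

    Returns transactions committed per CPU-second over the interval,
    or None if no CPU time elapsed (avoids division by zero).
    """
    txns = sample_b[0] - sample_a[0]
    cpu = sample_b[1] - sample_a[1]
    if cpu <= 0:
        return None
    return txns / cpu

# Tracking this ratio over weeks gives the "normal behavior" baseline the
# post describes: a sustained drop means each transaction is costing more
# CPU than usual, i.e. things are getting worse.
baseline = txn_per_cpu_second((1_000_000, 500.0), (1_090_000, 560.0))
```

Plotting this value over time, rather than inspecting it point by point, is what makes the long-term better-or-worse trend easy to read.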