Dimitri <dimitrik.fr@xxxxxxxxx> wrote:

> The idea is good, but *only* pooling will be not enough. I mean if
> all what pooler is doing is only keeping no more than N backends
> working - it'll be not enough. You never know what exactly your
> query will do - if you choose your N value to be sure to not
> overload CPU and then some of your queries start to read from disk -
> you waste your idle CPU time because it was still possible to run
> other queries requiring CPU time rather I/O, etc...

I never meant to imply that CPUs were the only resources which mattered. Network and disk I/O certainly come into play, and I would think that various locks might count as well. You have to benchmark your actual workload to find the sweet spot for your load on your hardware. I've usually found it to be around (2 * cpu count) + (effective spindle count), where effective spindle count is determined not only by your RAID but also by your access pattern. (If everything is fully cached, and you have no write delays because of a BBU RAID controller with write-back, effective spindle count is zero.)

Since the curve generally falls off more slowly past the sweet spot than it climbs to get there, I tend to go a little above the apparent sweet spot to protect against bad performance under a load mix different from my tests.

> I wrote some ideas about an "ideal" solution here (just omit the
> word "mysql" - as it's a theory it's valid for any db engine):
> http://dimitrik.free.fr/db_STRESS_MySQL_540_and_others_Apr2009.html#note_5442

I've seen similar techniques used in other databases, and I'm far from convinced that the approach is ideal or optimal.

-Kevin
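
P.S. For what it's worth, here is a minimal sketch of that sizing heuristic in Python. The cpu_count, effective_spindle_count, and headroom values are assumptions you'd replace with numbers measured on your own hardware and workload; the result is only a starting point for benchmarking, not a substitute for it.

def suggested_pool_size(cpu_count, effective_spindle_count, headroom=2):
    # (2 * cpu count) + (effective spindle count), plus a little headroom
    # because the curve falls off more slowly past the sweet spot than it
    # climbs to get there.  The headroom of 2 is just an illustrative guess.
    return 2 * cpu_count + effective_spindle_count + headroom

# Hypothetical example: 8 cores, mostly-cached working set on a RAID
# array, so call it 4 effective spindles -> 2*8 + 4 + 2 = 22 connections.
print(suggested_pool_size(cpu_count=8, effective_spindle_count=4))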