On Mon, Mar 4, 2013 at 7:43 AM, AJ Weber <aweber@xxxxxxxxxxx> wrote:
> Great info, I really appreciate the insight. Is there a FAQ/recommended
> setup for running pgbench to determine where this might be? (Is there a
> reason to setup pgbench differently based on the server's cores/memory/etc?)

Well, keep in mind that pgbench may or may not represent your real load. It can, however, be run with custom SQL scripts to generate a load other than its default, so you can get some idea of where your peak connections / throughput sits. And let's face it: if you're currently running 500 connections and your peak throughput occurs at 64, a pooler is going to make a difference whether you set it to 48, 64, or 100.

The basic starting point with pgbench is to use a scale factor of at least 2x however many connections you'll be testing. You can also run read-only transactions to find the peak connection count for read-only versus read/write workloads (example invocations at the end of this message). If read-only transactions peak at, say, 100 connections while read/write peaks at 24, and your app is 95% reads, then you're probably pretty safe setting a pooler to ~100 connections instead of the lower 24 (a pooler config sketch follows the pgbench examples below). If your app keeps one pool for read-only work (say, reporting) and another for read/write, you can set up two separate poolers, but that's added complexity you may not really need.

The real danger with lots of connections comes from having lots and lots of idle ones. Say you've got 1000 connections and 950 of them are idle. Then the server gets a load spike and queries start piling up: suddenly, instead of ~50 active connections, that number climbs to 100, 200, 300, and so on. Given the declining throughput most servers see as the number of active connections climbs, your server may slow to a crawl and never recover until you remove load.
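In case it helps, here's roughly what that testing looks like in practice. These invocations are just a sketch: the database name "testdb", the client counts, and the durations are placeholders to step through, not recommendations.

    # initialize at a scale factor >= 2x the max clients you'll test
    # (scale factor 200 covers tests up to ~100 connections)
    pgbench -i -s 200 testdb

    # read/write test: 24 clients, 4 worker threads, 5 minutes
    pgbench -c 24 -j 4 -T 300 testdb

    # read-only (select-only) test: 100 clients, 5 minutes
    pgbench -S -c 100 -j 4 -T 300 testdb

    # same idea, but using your own transaction script instead of
    # the built-in one, to get closer to your real workload
    pgbench -f custom.sql -c 64 -j 4 -T 300 testdb

Step -c up through a range (8, 16, 32, 64, ...) and watch where tps peaks; that peak is a reasonable starting point for the pooler's pool size.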
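And for concreteness on the pooler side: PgBouncer is one common choice (nothing above requires it specifically). Assuming the hypothetical numbers from earlier, the relevant pgbouncer.ini knobs would look something like this:

    [databases]
    ; clients connect to pgbouncer (port 6432) instead of postgres directly
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    ; cap actual postgres connections near the measured peak
    default_pool_size = 100
    ; the app can still open far more client-side connections;
    ; the extras just wait for a pooled server connection
    max_client_conn = 1000

That last pair is the whole point: the app thinks it has 1000 connections, but postgres only ever sees ~100 active ones, so a load spike queues in the pooler instead of swamping the server.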