On Mon, Feb 11, 2013 at 4:29 PM, Will Platnick <wplatnick@xxxxxxxxx> wrote:
> We will probably tweak this knob some more -- i.e., what is the sweet spot
> between 1 and 100? Would it be higher than 50 but less than 100? Or is it
> somewhere lower than 50?
>
> I would love to know the answer to this as well. We have a similar
> situation: pgbouncer with transaction-level pooling and 140 connections.
> What is the right value to size pgbouncer connections to? Is there a
> formula that takes the # of cores into account?

If you can come up with a synthetic benchmark that resembles your real load
(size, mix, etc.), you can test it and find the client count at which
throughput peaks while the server still behaves well. On a server I built a
few years back -- 48 AMD cores, 24 spinners in a RAID-10 for data, and 4
drives in a RAID-10 for pg_xlog (no RAID controller in that one, as the
chassis cooked them) -- throughput peaked at ~60 connections.

What you'll wind up with is a graph where throughput keeps climbing as you
add clients, then usually drops off quickly once you pass the peak. The
sharper the drop, the more dangerous it is to run your server in such an
overloaded state.

--
To understand recursion, one must first understand recursion.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
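P.S. A minimal sketch of how that sweep might be analyzed, for anyone
following along: run pgbench (or your own synthetic workload) at increasing
client counts, e.g. `pgbench -c $N -j 4 -T 60 mydb` for N in 10..100, record
the tps at each step, then locate the peak and gauge how sharp the falloff
is. The numbers below are made-up illustrative values, not measurements from
the machine described above; substitute your own results.

```python
# Hypothetical (clients, tps) pairs from a pgbench sweep -- made-up numbers
# for illustration only; collect your own with a workload matching your mix.
results = [
    (10, 4200), (20, 7900), (30, 10500), (40, 12100),
    (50, 12900), (60, 13200), (70, 12800), (80, 11000),
    (100, 7400),
]

# Throughput peak: the client count with the highest tps.
peak_clients, peak_tps = max(results, key=lambda r: r[1])

# Gauge how sharp the drop is past the peak by comparing the peak tps
# with the tps at the highest client count tested.
tail_clients, tail_tps = results[-1]
drop_pct = 100.0 * (peak_tps - tail_tps) / peak_tps

print(f"throughput peaks at {peak_clients} clients ({peak_tps} tps)")
print(f"{drop_pct:.0f}% drop from peak to {tail_clients} clients")
```

With these sample numbers the sweet spot is 60 clients, and the steep drop
toward 100 clients is the kind of cliff that makes overloading risky.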