On 03/24/2014 06:27 AM, Brett Wooldridge wrote:
> This was one of the reasons I was proposing the fixed pool design.  In my
> experience, even in pools that maintain a minimum number of idle
> connections, responding to spike demands is problematic.  If you have a
> pool with, say, 30 max connections and a 10 minimum idle connection goal, a
> sudden spike demand for 20 connections means the pool can satisfy 10
> instantly but is then left to [try to] establish 10 connections before the
> application's connectionTimeout (read: acquisition timeout from the pool)
> is reached.  This in turn generates a spike demand on the database, slowing
> down not only the connection establishments themselves but also the
> completion of transactions that might actually return connections to the
> pool.

So what makes sense really depends on what your actual connection pattern
is.  Idle connections aren't free; aside from PostgreSQL's lock-checking
overhead, they hold on to any virtual memory allocated to them while they
were working.  In the aggregate, this can add up to quite a bit of memory,
which can then cause the OS to decide not to cache some data you could
really use.

Now, if your peak is 100 connections and your median is 50, this doesn't
signify.  But I know more than a few workloads where the peak is 1000 and
the median is 25, and in that case you want to drop the idle connections
gradually.  The key is to keep a large enough buffer of ready connections
to deal with the next peak when it comes in.

That also means that even if the pool is a fixed size, you want to rotate
the actual sessions in and out, so that no session hangs onto its maximum
virtual memory indefinitely.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
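Brett's spike scenario can be sketched with some back-of-the-envelope arithmetic. This is a hypothetical model, not any real pool's implementation; the establishment cost and timeout values are assumptions chosen only to illustrate how a serial pool-filler can blow past the acquisition timeout:

```python
# Hypothetical model of the spike scenario quoted above:
# a pool capped at 30 connections, holding 10 idle, hit by a
# sudden demand for 20.  All numbers are illustrative assumptions.

ESTABLISH_MS = 150            # assumed cost to open one new connection
CONNECTION_TIMEOUT_MS = 1000  # assumed pool acquisition timeout

def served_instantly(idle: int, demand: int) -> int:
    """Requests satisfied immediately from already-open idle connections."""
    return min(idle, demand)

def must_wait(idle: int, demand: int, max_size: int, in_use: int = 0) -> int:
    """Requests that must wait for a brand-new connection to be opened,
    bounded by the pool's remaining headroom."""
    shortfall = max(0, demand - idle)
    headroom = max_size - in_use - idle
    return min(shortfall, headroom)

idle, demand, max_size = 10, 20, 30
instant = served_instantly(idle, demand)       # 10 served at once
waiting = must_wait(idle, demand, max_size)    # 10 left waiting
print(instant, waiting)                        # -> 10 10

# Worst case: if new connections are established serially (e.g. a
# single pool-filler thread), the last waiter sees the sum of all
# establishment costs -- here 1500 ms, past the 1000 ms timeout.
worst_wait = waiting * ESTABLISH_MS
print(worst_wait, worst_wait > CONNECTION_TIMEOUT_MS)  # -> 1500 True
```

The point of the sketch is that the timeout is breached even though the pool never exceeds its cap; the database meanwhile absorbs ten simultaneous connection attempts, which is exactly the secondary spike Brett describes.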