David Gauthier <davegauthierpg@xxxxxxxxx> writes:

> After looking at some of the factors that can affect this, I think it may
> be important to know that most of the connections will be almost idle (in
> terms of interacting with the DB). The "users" are perl/dbi scripts which
> connect to the DB and spend the vast majority of the time doing things
> other than interacting with the DB. So a connection is consumed, but it's
> not really working very hard with the DB per-se. I am cleaning up some of
> that code by strategically connecting/disconnecting only when a DB
> interaction is required. But for my edification, is it roughly true that 2
> connections working with the DB 100% of the time is equivalent to 20
> connections @ 10% = 200 connections @ 1% (if you know what I mean)?

Based on that additional info, I would definitely follow Laurenz's
suggestion. It has been a long time since I used Perl DBI, but I'm pretty
sure there is support for connection pooling, or you can use one of the PG
connection pooling solutions.

There is a fixed memory allocation per connection, so 2 connections at
100% is not the same as 20 connections at 10%. Setting up a connection
pool is usually the first thing I do. If additional connections are still
required after that, I would increase the limit in small jumps -
definitely would not go from 100 to 500.

BTW, running PG on a virtual machine is not an issue in itself - this is
very common these days. However, I would ensure you are up-to-date wrt the
latest minor release for that version and would use clients with the same
version as the master.

--
Tim Cross
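For what it's worth, one of the PG pooling solutions mentioned above is PgBouncer, which suits the mostly-idle-client case well: many client connections share a small, fixed set of backend connections. A minimal sketch of a pgbouncer.ini - the database name, host, auth file path, and pool sizes here are placeholder assumptions, not recommendations for any particular workload:

```ini
; pgbouncer.ini - hypothetical values, tune for your own workload
[databases]
; clients connect to pgbouncer asking for "mydb";
; pgbouncer forwards to the real PostgreSQL server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling: a server connection is held only for the
; duration of a transaction - a good fit for mostly-idle clients
pool_mode = transaction
; many mostly-idle clients map onto a small number of backends
max_client_conn = 200
default_pool_size = 20
```

The perl/dbi scripts would then point their DSN at port 6432 instead of 5432, with no other code changes in the common case (transaction pooling does have caveats around session-level state such as prepared statements).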