We have anywhere from 60-80 background worker processes connecting to Postgres, performing a short task, and then disconnecting. The lifetime of these tasks averages 1-3 seconds.

I know there is some connection overhead to Postgres, but I don't know the best way to measure that overhead or to determine whether it's currently an issue at all. If the overhead turns out to be substantial, I would think about employing a connection pool like pgbouncer to keep a static set of connections and dole them out to the transient workers on demand. The overall cumulative number of connections wouldn't change; I would just be trying to avoid setting them up and tearing them down so quickly.

Is this something I should look into, or is it not much of an issue? What's the best way to determine whether I could benefit from using a connection pool?

Thanks.
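
For reference, here is a rough sketch of how I was thinking of measuring the per-connection cost, comparing connect-per-task against a single reused connection (roughly what a pooler would give the workers). This assumes psycopg2; the DSN and the SELECT 1 stand-in query are placeholders for our real setup:

    import time
    import psycopg2

    DSN = "dbname=mydb user=worker host=localhost"   # placeholder DSN
    ITERATIONS = 100

    # Pattern 1: open and close a connection for every short task.
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        conn = psycopg2.connect(DSN)
        with conn.cursor() as cur:
            cur.execute("SELECT 1")   # stand-in for the real 1-3 second task
            cur.fetchone()
        conn.close()
    per_task = (time.perf_counter() - start) / ITERATIONS
    print(f"connect-per-task: {per_task * 1000:.1f} ms per iteration")

    # Pattern 2: reuse one connection, as the workers could with a pooler.
    conn = psycopg2.connect(DSN)
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            cur.fetchone()
    conn.close()
    per_task_reused = (time.perf_counter() - start) / ITERATIONS
    print(f"reused connection: {per_task_reused * 1000:.1f} ms per iteration")

My assumption is that the difference between the two numbers approximates the setup/teardown cost per connection, which I could then weigh against the 1-3 second task duration. Does that seem like a reasonable way to gauge it, or is there a better approach?
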