On Tue, 2020-04-07 at 15:51 -0400, David Gauthier wrote:
> But for my edification, is it roughly true that 2 connections working
> with the DB 100% of the time is equivalent to 20 connections @ 10%
> = 200 connections @ 1% (if you know what I mean)?

Roughly, yes.  There is a certain overhead, since each query has to take a
snapshot of the database, and taking that snapshot has to consider all
active connections.

But the main problem is that you have no way to ensure that all those idle
connections stay idle.

Imagine that most of these sessions issue short statements against "table1".
There are also some longer SELECTs.  No problem so far.

Now along comes a statement that needs an ACCESS EXCLUSIVE lock on the table
for a very short time, like a fast ALTER TABLE.  Unfortunately, that ACCESS
EXCLUSIVE lock has to wait for one of the longer SELECTs to finish.  Now
none of the moderately short statements can run any more either; they all
queue behind the ACCESS EXCLUSIVE lock.

For a while, nothing happens.  Then, as soon as the ALTER TABLE has got its
lock, done its work and finished, all hell breaks loose, because all the
queued sessions start executing their statements at the same time.

I have seen database servers go down because of such events, and the problem
is the high number of connections, even if they are idle most of the time.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
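
The rough equivalence in the quoted question can be sanity-checked with a
back-of-the-envelope sketch (illustrative arithmetic only; it deliberately
ignores the per-connection snapshot overhead and the lock pile-up effect
described above, which are exactly why more idle connections are worse in
practice):

```python
# Average number of busy connections = connections * fraction of time busy.
# All three configurations from the question carry the same average load.
configs = [(2, 1.00), (20, 0.10), (200, 0.01)]
for n_conn, busy_fraction in configs:
    avg_busy = n_conn * busy_fraction
    print(f"{n_conn} connections @ {busy_fraction:.0%} -> "
          f"{avg_busy:.1f} busy on average")
# Each configuration averages 2.0 busy connections.
```

The averages match, but only the average: 200 mostly idle connections can
all wake up at once behind a lock, while 2 connections never can.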