In case there's any doubt, the questions below aren't rhetorical.

Matthew Wakeling <matthew@xxxxxxxxxxx> wrote:
> Interesting idea. As far as I can see, you are suggesting solving
> the too many connections problem by allowing lots of connections,
> but only allowing a certain number to do anything at a time?

Right.

> A proper connection pool provides the following advantages over
> this:
>
> 1. Pool can be on a separate machine or machines, spreading load.

Sure, but how would you do that with a built-in implementation?

> 2. Pool has a lightweight footprint per connection, whereas
> Postgres doesn't.

I haven't compared the footprint of, say, a pgpool connection on the
database server to that of an idle PostgreSQL connection. Do you
have any numbers?

> 3. A large amount of the overhead is sometimes connection setup,
> which this would not solve. A pool has cheap setup.

This would probably be most useful where the client held a
connection for a long time, not for the "login for each database
transaction" approach. I'm curious how often you think application
software uses that approach.

> 4. This could cause Postgres backends to be holding onto large
> amounts of memory while being prevented from doing anything,
> which is a bad use of resources.

Isn't this point 2 again? If not, what are you getting at? Again,
do you have numbers for the comparison, assuming the connection
pooler is running on the database server?

> 5. A fair amount of the overhead is caused by context-switching
> between backends. The more backends, the less useful any CPU
> caches.

Would this be true while a backend was blocked? Would this not be
true for a connection pool's client-side connections?

> 6. There are some internal workings of Postgres that involve
> keeping all the backends informed about something going on. The
> more backends, the greater this overhead is. (This was pretty
> bad with the sinval queue overflowing a while back, but a bit
> better now. It still causes some overhead.)

Hmmm...
I hadn't thought about that. Again, any numbers (e.g., profiling
information) on this?

> 7. That lock would have a metric *($!-load of contention.

Here I doubt you. It would be held for such short periods that I
suspect collisions would be relatively infrequent compared to some
of the other locks we use. As noted in the email, it might actually
normally be an "increment and test" within an existing locked
block. Also, assuming that any "built in" connection pool would run
on the database server, why would you expect contention on this
lock to be worse than on whatever monitors the connection count in
the pooler?

-Kevin

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
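[Editor's note: the scheme debated above (accept many connections, but let only a fixed number execute at a time) amounts to an admission-control counter guarded by a briefly held lock, the "increment and test" Kevin mentions. A minimal sketch in Python follows; it is purely illustrative (PostgreSQL itself is written in C), and the class and method names are invented for this example.]

```python
import threading

class AdmissionController:
    """Many sessions may be connected, but only max_active may run at once.

    Illustrates the "increment and test within an existing locked block"
    idea: the lock protects a counter and is held only long enough to
    bump or decrement it, so contention on it should be brief.
    """

    def __init__(self, max_active: int):
        self.max_active = max_active
        self.active = 0
        # Condition wraps the lock so waiters sleep without holding it.
        self.slot_freed = threading.Condition(threading.Lock())

    def begin_statement(self) -> None:
        # Lock held only for the increment-and-test itself; a blocked
        # session waits on the condition, releasing the lock meanwhile.
        with self.slot_freed:
            while self.active >= self.max_active:
                self.slot_freed.wait()
            self.active += 1

    def end_statement(self) -> None:
        # Decrement and wake one waiting session, if any.
        with self.slot_freed:
            self.active -= 1
            self.slot_freed.notify()
```

A session would call `begin_statement()` before doing work and `end_statement()` after; the invariant is that `active` never exceeds `max_active`, however many sessions are connected.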