On Fri, 9 Jul 2010, Kevin Grittner wrote:
> Any thoughts on the "minimalist" solution I suggested a couple weeks ago?:
>
> http://archives.postgresql.org/pgsql-hackers/2010-06/msg01385.php
> http://archives.postgresql.org/pgsql-hackers/2010-06/msg01387.php
>
> So far, there has been no comment by anyone....
Interesting idea. As far as I can see, you are suggesting solving the "too many connections" problem by allowing lots of connections, but only letting a certain number of them do anything at a time?
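For anyone trying to picture the proposal: it amounts to admission control in front of query execution. A minimal sketch (my own illustration, not the proposed patch — using a counting semaphore in Python as a stand-in for the suggested lock) might look like:

```python
import threading

class AdmissionGate:
    """Allow many open 'connections', but only max_active may execute
    at any one time; the rest block until a slot frees up."""
    def __init__(self, max_active):
        self._sem = threading.Semaphore(max_active)

    def run(self, query_fn):
        # Block here, mimicking the proposed lock that would park
        # excess backends before they start doing real work.
        with self._sem:
            return query_fn()

# Eight "connections", but at most two active at once:
gate = AdmissionGate(max_active=2)
results = []

def worker(n):
    results.append(gate.run(lambda: n * n))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

All eight callers eventually finish, but never more than two run concurrently — which is the whole point, and also why point 7 below matters: every caller serialises on that one gate.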
A proper connection pool provides the following advantages over this:

1. The pool can be on a separate machine or machines, spreading load.
2. The pool has a lightweight footprint per connection, whereas Postgres doesn't.
3. A large amount of the overhead is sometimes connection setup, which this would not solve. A pool has cheap setup.
4. This could cause Postgres backends to hold onto large amounts of memory while being prevented from doing anything, which is a bad use of resources.
5. A fair amount of the overhead is caused by context-switching between backends. The more backends, the less useful any CPU caches.
6. Some internal workings of Postgres involve keeping all the backends informed about something going on. The more backends, the greater this overhead is. (This was pretty bad with the sinval queue overflowing a while back, but it's a bit better now. It still causes some overhead.)
7. That lock would have a metric *($!-load of contention.

Matthew

--
Unfortunately, university regulations probably prohibit me from
eating small children in front of the lecture class.
-- Computer Science Lecturer

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
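To make point 3 concrete: a pool pays the expensive connection setup once, up front, and then hands the same connections out repeatedly. A toy sketch (my own illustration — `fake_connect` is a hypothetical stand-in for a real, expensive database connect):

```python
import queue

class ConnectionPool:
    """Minimal client-side pool: a fixed set of connections is created
    once and reused, so callers skip per-request setup cost."""
    def __init__(self, connect, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())   # pay the setup cost once, up front

    def execute(self, fn):
        conn = self._pool.get()         # wait for a free connection
        try:
            return fn(conn)
        finally:
            self._pool.put(conn)        # return it for reuse

# Hypothetical stand-in for an expensive database connect:
setup_calls = 0
def fake_connect():
    global setup_calls
    setup_calls += 1
    return {"id": setup_calls}

pool = ConnectionPool(fake_connect, size=3)
out = [pool.execute(lambda c: c["id"]) for _ in range(10)]
print(setup_calls)  # 3 -- ten requests, only three connections ever created
```

Ten requests are served by three connections; with the lock-based scheme, all ten clients would still each pay full backend startup and hold a backend's memory while queued.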