Re: Pooling in Core WAS: Need help in performance tuning.

On Fri, Jul 9, 2010 at 11:33 PM, Craig Ringer
<craig@xxxxxxxxxxxxxxxxxxxxx> wrote:
> On 10/07/2010 9:25 AM, Josh Berkus wrote:
>>
>>> It *is* the last place you want to put it, but putting it there can
>>> be much better than not putting it *anywhere*, which is what we've
>>> often seen.
>>
>> Well, what you proposed is an admission control mechanism, which is
>> *different* from a connection pool, although the two overlap.  A
>> connection pool solves 4 problems when it's working:
>>
>> a) limiting the number of database server processes
>> b) limiting the number of active concurrent queries
>> c) reducing response times for allocating a new connection
>> d) allowing management of connection routes to the database
>> (redirection, failover, etc.)
>
> I agree with you: for most Pg users, (a) is really, really important. As you
> know, in PostgreSQL each connection maintains not only general connection
> state (GUC settings, etc.) and, if in a transaction, transaction state, but
> also a query executor (a full backend). That gets nasty not only in memory
> use but also in its impact on active query performance, since all those
> query executors have to participate in global signalling for lock
> management and so on.
>
> So an in-server pool that solved (b) but not (a) would IMO not be
> particularly useful for the majority of users.
>
> That said, I don't think it follows that (a) cannot be solved in-core. How
> much architectural change would be required to do it efficiently enough,
> though...

Right, let's not confuse Kevin's argument that we should have
connection pooling in core with advocacy for any particular patch or
feature suggestion that he may have offered on some other thread.  A
very simple in-core connection pooler might look something like this:
when a session terminates, the backend doesn't exit.  Instead, it
waits for the postmaster to reassign it to a new connection, which the
postmaster does in preference to starting new backends when possible.
But if a backend doesn't get assigned a new connection within a
certain period of time, then it goes ahead and exits anyway.
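
To make that concrete, here's a rough sketch of the linger-and-wait
loop.  None of this corresponds to actual postmaster or backend code:
pm_sock, receive_client_fd(), and serve_connection() are made-up names,
and a real implementation would have to hook into the existing main
loop, signal handling, and per-session state reset.

    #include <poll.h>
    #include <stdlib.h>

    /* Hypothetical helpers, not real PostgreSQL functions. */
    extern int  receive_client_fd(int pm_sock);   /* fd passing, see below */
    extern void serve_connection(int client_fd);

    #define POOL_LINGER_MS  (60 * 1000)   /* give up after 60 idle seconds */

    static void
    backend_linger_loop(int pm_sock)
    {
        for (;;)
        {
            struct pollfd pfd = { .fd = pm_sock, .events = POLLIN };

            /* Session over: wait for the postmaster to hand us a new one. */
            if (poll(&pfd, 1, POOL_LINGER_MS) <= 0)
                exit(0);                  /* timeout or error: exit as usual */

            /* The postmaster passed us a new client socket; serve it. */
            serve_connection(receive_client_fd(pm_sock));
        }
    }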

You might argue that this is not really a connection pooler at all
because there's no admission control, but the point is that you're
avoiding the overhead of creating and destroying backends
unnecessarily.  Of course, I'm also relying on the unsubstantiated
assumption that it's possible to pass a socket connection between
processes.
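
For what it's worth, on Unix-like systems that part at least is
possible: an open descriptor can be handed to another process over a
Unix-domain socket with sendmsg() and an SCM_RIGHTS control message
(Windows has WSADuplicateSocket for the same job).  The sending side
looks roughly like this; send_fd() is just an illustration, not
anything in the tree:

    #include <string.h>
    #include <sys/socket.h>

    /*
     * Pass an open file descriptor to another process over a Unix-domain
     * socket.  Returns 0 on success, -1 on failure.
     */
    static int
    send_fd(int unix_sock, int fd_to_pass)
    {
        char            dummy = 'x';    /* must transmit at least one byte */
        struct iovec    iov = { .iov_base = &dummy, .iov_len = 1 };
        union
        {
            struct cmsghdr  hdr;        /* forces correct alignment */
            char            buf[CMSG_SPACE(sizeof(int))];
        }               cmsgbuf;
        struct msghdr   msg = {0};
        struct cmsghdr *cmsg;

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsgbuf.buf;
        msg.msg_controllen = sizeof(cmsgbuf.buf);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

        return (sendmsg(unix_sock, &msg, 0) == 1) ? 0 : -1;
    }

The receiving side is the mirror image: recvmsg() and then pulling the
descriptor back out of the control message.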

Another approach to the performance problem is to try to find ways of
reducing the overhead associated with having a large number of
backends in the system.  That's not a connection pooler either, but it
might reduce the need for one.

Still another approach is admission control based on transactions,
backends, queries, memory usage, I/O, or what have you.
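
In its simplest form that's just a counting limit on active queries,
e.g. a semaphore in shared memory sized to however many queries you're
willing to run at once.  Purely illustrative, not a proposal for how it
would actually be wired into the executor:

    #include <errno.h>
    #include <semaphore.h>

    /*
     * Toy admission-control gate.  The sem_t is assumed to live in shared
     * memory set up by the postmaster (e.g. an anonymous MAP_SHARED
     * mapping created before forking), so pshared = 1 makes the limit
     * visible to every backend.
     */
    void
    admission_init(sem_t *query_slots, unsigned int max_active_queries)
    {
        sem_init(query_slots, 1, max_active_queries);
    }

    /* Call before starting to execute a query; blocks if at the limit. */
    void
    admission_acquire(sem_t *query_slots)
    {
        while (sem_wait(query_slots) == -1 && errno == EINTR)
            ;                       /* retry if interrupted by a signal */
    }

    /* Call when the query finishes or aborts. */
    void
    admission_release(sem_t *query_slots)
    {
        sem_post(query_slots);
    }

The mechanism is the easy part; the interesting questions are what to
count (queries, transactions, estimated memory) and where to block.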

None of these things are mutually exclusive.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
