
Re: Limiting number of connections to PostgreSQL per IP (not per DB/user)?

On 29.11.2011 23:38, Merlin Moncure wrote:
> On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram <modelnine@xxxxxxxxxxxxx> wrote:
>> Hello!
>>
>> Sorry for that subscribe post I've just sent, that was bad reading on my
>> part (for the subscribe info on the homepage).
>>
>> Anyway, the title says it all: is there any possibility to limit the number
>> of connections that a client can have concurrently with a PostgreSQL-Server
>> with "on-board" means (where I can't influence which user/database the
>> clients use, rather, the clients mostly all use the same user/database, and
>> I want to make sure that a single client which runs amok doesn't kill
>> connectivity for other clients)? I could surely implement this with a proxy
>> sitting in front of the server, but I'd rather implement this with
>> PostgreSQL directly.
>>
>> I'm using (and need to stick with) PostgreSQL 8.3 because of the frontend
>> software in question.
>>
>> Thanks for any hints!
> 
> I think the (hypothetical) general solution for these types of
> problems is to have logon triggers.  It's one of the (very) few things
> I envy from SQL Server -- see  here:
> http://msdn.microsoft.com/en-us/library/bb326598.aspx.

I'd like to have logon triggers too, but I don't think that's the right
solution for this problem. For example, logon triggers would only be
called after forking the backend, which means extra overhead.

The connection limits should be checked while the connection is being
established (when validating the username/password etc.), before the
backend is created.

Anyway, I do have an idea how this could be done using a shared library
(so it has the same disadvantages as logon triggers). Hopefully I'll
have time to implement a PoC of this over the weekend.

> Barring the above, if you can trust the client to call a function upon
> connection I'd just do that and handle the error on the client with a
> connection drop. Barring *that*, I'd be putting my clients in front of
> pgbouncer with some patches to the same to get what I needed
> (pgbouncer is single threaded making firewally type features quite
> easy to implement in an ad hoc fashion).

A connection pooler is somehow both easier and more complex at the same time.

You can use connect_query to execute whatever you want after connecting
to the database (instead of trusting the client to do that), but why
would you? The database will see the IP of the pgbouncer, not the IP of
the original client, so using such a query to count connections per IP
is pointless.
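
For completeness, with clients connecting directly to PostgreSQL (no
pooler in between), the "call a function on connect" check could look
roughly like the sketch below. The function name and the limit passed to
it are made up for illustration; on 8.3 pg_stat_activity still has the
procpid column, and the function probably needs to be SECURITY DEFINER
(owned by a superuser) so that client_addr of other sessions is visible:

    CREATE OR REPLACE FUNCTION check_connection_limit(max_conns integer)
    RETURNS void AS $$
    DECLARE
        n integer;
    BEGIN
        -- count sessions coming from the same client address as this backend
        -- (client_addr is NULL for unix-socket connections, those are ignored)
        SELECT count(*) INTO n
          FROM pg_stat_activity
         WHERE client_addr = (SELECT client_addr
                                FROM pg_stat_activity
                               WHERE procpid = pg_backend_pid());

        IF n > max_conns THEN
            RAISE EXCEPTION 'too many connections (%) from this IP', n;
        END IF;
    END;
    $$ LANGUAGE plpgsql SECURITY DEFINER;

The client would then call e.g. SELECT check_connection_limit(10); right
after connecting and drop the connection on error -- which of course only
works if you can trust the clients to actually do that.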

You could also modify pgbouncer, and that should be quite simple, but
even without patching it you can give each customer a different
username/password (at the pgbouncer level), point them at different
databases and set pool_size for each of those connections. It won't use
the IP to count connections, but the users won't 'steal' connections
from each other.
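
Just to illustrate the per-customer setup (the names, paths and pool
sizes below are made up, and the option names should be checked against
the docs of your pgbouncer version), pgbouncer.ini might look something
like this:

    [databases]
    ; each customer gets its own entry, and therefore its own pool
    customer1 = host=127.0.0.1 port=5432 dbname=appdb user=customer1 pool_size=5
    customer2 = host=127.0.0.1 port=5432 dbname=appdb user=customer2 pool_size=5

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    max_client_conn = 200
    default_pool_size = 10

With something like that each customer can only saturate its own pool --
extra clients wait in pgbouncer's queue instead of eating backend
connections on the server.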

Tomas

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

