
Re: Question: Multiple pg clusters on one server can be reached with the standard port.


 



On 16 June 2023 17:59, "Ron" <ronljohnsonjr@xxxxxxxxx> wrote:

> On 6/16/23 10:54, Brainmue wrote:
> 
>> On 16 June 2023 17:41, "Ron" <ronljohnsonjr@xxxxxxxxx> wrote:
> 
> On 6/16/23 10:18, Laurenz Albe wrote:
>> On Fri, 2023-06-16 at 14:49 +0000, Brainmue wrote:
> 
> On 16 June 2023 14:50, "Laurenz Albe" <laurenz.albe@xxxxxxxxxxx> wrote:
>> On Fri, 2023-06-16 at 12:35 +0000, Brainmue wrote:
>> 
>> We want to minimise dependencies between the application and the associated PostgreSQL DB.
>> The idea is that the application only gets its DB alias, which is then used as the connection string.
>> That way we can decide in the backend which server the PostgreSQL DB is running on.
>> There is an existing solution for that: the libpq connection service file:
>> https://www.postgresql.org/docs/current/libpq-pgservice.html
>> 
>> If you want to manage the connection strings centrally, you can use LDAP lookup:
>> https://www.postgresql.org/docs/current/libpq-ldap.html
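
For reference, the connection service file mentioned above is a small INI-style file (~/.pg_service.conf per user, or the path given in PGSERVICEFILE) that maps an alias to connection parameters, so the client only needs to know the alias. A minimal sketch, with made-up host and service names:

    [myapp]
    host=db1.internal.example.com
    port=5433
    dbname=myapp

    # clients then connect by alias only, e.g.
    #   psql "service=myapp"
    #   PGSERVICE=myapp psql

With the LDAP variant from the second link, the parameters behind such an alias are looked up from an LDAP directory at connect time instead of being kept in the file.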
> 
> Thank you, I already know this solution, but LDAP is out of the question for us, and the
> service file again means changing something on the client. And that is exactly what we don't want.
>> Okay.
>> 
>> Then why don't you go with your original solution, but use a unique TCP port number
>> for each database? There are enough port numbers available. That way, there is no
>> collision and no need for a proxy to map port numbers.
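
To make the port-per-cluster idea concrete: each cluster gets its own port in its postgresql.conf, and clients pick the target by port alone. The port numbers and names below are only illustrative:

    # postgresql.conf of the cluster for app1
    port = 5433

    # postgresql.conf of the cluster for app2
    port = 5434

    # client side, same host, different ports:
    #   psql "host=dbserver port=5433 dbname=app1"
    #   psql "host=dbserver port=5434 dbname=app2"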
> 
> In practice, that gets very complicated in large organizations: every time you add another
> database, you must file another request with the CISO RISK office to get yet another non-standard
> port opened on dozens of machines, and the network team must implement them.
> 
> Operationally much simpler to have a listener handle that.
> 
> -- Born in Arizona, moved to Babylonia.
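
Such a listener can be sketched with PgBouncer in front of the clusters: it accepts all connections on the standard port 5432 and routes each database alias to the right backend host and port. The hosts, ports, and aliases below are invented for illustration, and a real setup also needs matching authentication:

    ; /etc/pgbouncer/pgbouncer.ini (sketch)
    [databases]
    app1 = host=10.0.0.11 port=5433 dbname=app1
    app2 = host=10.0.0.12 port=5434 dbname=app2

    [pgbouncer]
    listen_addr = *
    listen_port = 5432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt

    ; clients only need the alias:
    ;   psql "host=pgproxy port=5432 dbname=app1"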
>> Hello Ron,
>> 
>> I have to agree with you there as well. The workflow you have to go through is often a time
>> problem too: many parties have to give their approval, and the application owners still have to
>> provide justifications.
>> At the same time, we have to stay flexible and fast, allocate resources sensibly at any time, and
>> give each application the best possible performance.
> 
> There's always The Cloud... spinning up a new AWS RDS Postgresql is fast and simple. (Costly,
> though.)
> 
> -- Born in Arizona, moved to Babylonia.

We know that too, but our data should/must currently remain in-house on our own hardware.
That is why we need a solution at our company.

Regards
Michael





