Given the design and architecture of the Postgres engine, shared storage is not a truly redundant solution for a large database cluster. Here is why: if the shared storage corrupts data, every node sees the same damaged pages, because there is no independent copy to recover from. Repairing that kind of corruption at the page level is far worse than failing over to a replica that has its own copy of the data. That is why Postgres does not ship shared-storage clustering by default.
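To make "corruption at page level" concrete: PostgreSQL only detects damaged pages if the cluster was initialized with data checksums (initdb --data-checksums, which is not the default). A rough sketch of how you would check for it on PostgreSQL 12 or later, assuming checksums are enabled:

    -- 'on' only if the cluster was created with --data-checksums
    SHOW data_checksums;

    -- per-database count of checksum failures the server has detected
    SELECT datname, checksum_failures, checksum_last_failure
    FROM pg_stat_database;

Even then, a checksum failure only tells you that a page is damaged; on shared storage there is no second, independent copy of that page to repair it from.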
Just sharing my thoughts; hope this helps you.
Thanks & Regards,
Viggu
From: Laurenz Albe <laurenz.albe@xxxxxxxxxxx>
Sent: Friday, February 23, 2024 12:32 PM
To: norbert poellmann <np@xxxxxx>; pgsql-admin@xxxxxxxxxxxxxxxxxxxx <pgsql-admin@xxxxxxxxxxxxxxxxxxxx>
Subject: Re: Would you ever recommend Shared Disk Failover for HA?

On Thu, 2024-02-22 at 20:34 +0100, norbert poellmann wrote:
> https://www.postgresql.org/docs/current/different-replication-solutions.html
> is listing a shared disk solution for HA.
>
> It also mentions "that the standby server should never access the shared storage
> while the primary server is running."
>
> In a datacenter, where we have postgresql servers running on vmware VMs, the
> shared disk configuration sounds like an appealing solution.
>
> But [...]
>
> So it seems to me, the comfort of a single-server solution, which, in a
> failover, gets replaced by another single server, is paid for by accepting a low risk
> of high damage.
>
> I know of the provisions of fencing, STONITH, etc. - but in practice, what is a robust solution?
>
> For example: How can I STONITH a node while having network problems?
> Without reaching the host, I cannot shoot it, nor shut it.
>
> Would you share your opinions or practical business experiences on this topic?

Back in the old days, we had special hardware devices for STONITH.

Anyway, my personal experience with a shared disk setup is a bad one.
Imagine two nodes, redundantly attached to disks mirrored across data centers
with Fibre Channel. No single point of failure, right?

Well, one day one of the Fibre Channel cables had intermittent failures, which
led to a corrupted file system. So we ended up with a corrupted file system,
nicely mirrored across data centers. We had to restore the 3TB database from backup.

Yours,
Laurenz Albe