What are you trying to protect against? Software failure? Hardware
failure? Both?
Depending on your budget, you could theoretically point any number of
failover nodes at a SAN, so long as you make sure only one of them is
running Postgres at a time. Of course, you still have a single point
of failure in the SAN itself. If you aren't made of money and are running
Linux, we've found DRBD is a great way to cluster two machines, and it
avoids a few single points of failure. The trade-off is that you're limited
to two or three cluster nodes.
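If you go the DRBD route, a bare-bones two-node resource definition looks
roughly like the sketch below (the hostnames, IPs, and backing device here
are placeholders; adjust for your own setup):

    resource pgdata {
      protocol C;                  # synchronous replication, safest for a database
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;       # backing block device that will hold $PGDATA
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

Postgres only ever runs on whichever node currently holds the DRBD primary
role; a cluster manager such as Heartbeat or Pacemaker takes care of promoting
the device, mounting the filesystem, and starting Postgres on failover.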
What are you trying to achieve with your offsite node? Is it supposed to
pick up the load if the cluster dies?
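If it just needs to be a warm copy you can fail over to manually, the log
shipping you describe below is straightforward: archive_command on the active
node pushes WAL segments offsite, and the offsite instance sits in continuous
recovery restoring them. A minimal sketch, assuming a host named "offsite" and
a /var/lib/pgsql/wal_archive directory (both placeholders):

    # postgresql.conf on the active cluster node
    archive_mode    = on
    archive_command = 'rsync -a %p offsite:/var/lib/pgsql/wal_archive/%f'

    # recovery.conf on the offsite instance
    restore_command = 'pg_standby /var/lib/pgsql/wal_archive %f %p'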
David Kerr wrote:
I'm trying to meet a very high uptime requirement in a high-performance environment.
To do this we will need to have some form of cluster for our databases.
What I plan on doing is:
Postgres installed on a cluster configured active/passive (both nodes pointing to the same SAN).
(If PG or the OS fails, we trigger a failover to the passive node.)
Log shipping between that cluster and a single PG instance off site.
Is this a common/recommended method of handling clustering with Postgres? Google searches
basically point to using a replication-based solution, which I don't think would meet my
performance demands.
Does anyone have experience with this or a similar setup that they could share with me?
Thanks
Dave