
Re: High Availability with Postgres

On 06/20/10 10:36 AM, David Fetter wrote:
> On Sun, Jun 20, 2010 at 07:34:10PM +0300, Elior Soliman wrote:
>> My company is looking for a high-availability solution for Postgres.
>>
>> Our proposed solution is as follows:
>> Two DB servers will use a common external storage array (with RAID).
> Stop right there.  This is the Oracle way of doing things, and it
> doesn't work for PostgreSQL.

Sure it does, as long as the database filesystem is mounted on only the currently active server, and only that one instance of postgres is running against it. This is the traditional HA 'active/standby' server configuration. Note that I'm *not* talking about Oracle RAC active-active clustering.

This is also one of the only postgres HA configurations that won't lose /any/ committed transactions on a failover. In most PITR, WAL-shipping, or Slony configurations, the standby runs several seconds behind realtime, so the most recently committed transactions can be lost.

Use a cluster manager, such as Linux-HA Heartbeat, Veritas Cluster Server, or Sun Cluster, to manage the state transitions.
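
As a rough illustration, a minimal Heartbeat v1 configuration for such a pair might look like the following. The node names, device paths, mount point, and service IP are only placeholders; IPaddr and Filesystem are the stock Heartbeat resource scripts, and 'postgresql' stands in for whatever init script your distro provides:

    # /etc/ha.d/ha.cf (excerpt)
    keepalive 2            # heartbeat interval, seconds
    deadtime 30            # declare a node dead after this long without heartbeats
    serial /dev/ttyS0      # dedicated serial heartbeat link
    bcast eth1             # second heartbeat link on a private NIC
    auto_failback off      # don't fail back automatically when the old node returns
    node pgdb1
    node pgdb2

    # /etc/ha.d/haresources
    # preferred node, then the resources started left-to-right on takeover:
    # shared IP, shared filesystem, then the postgres init script
    pgdb1 IPaddr::192.168.50.10/24/eth0 Filesystem::/dev/sdb1::/var/lib/pgsql::ext3 postgresql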


On a manual failover, the active server stops postgres, releases the shared IP, and unmounts the shared storage; then the standby server fences the formerly active server (so it can't touch the storage even if it accidentally tried), adopts the shared IP, mounts the shared storage, starts postgres, and is online.

On a failover caused by a failed server, the standby performs the same takeover steps on its own; since the dead server can't cleanly release anything, the fencing step is what guarantees it is really off the storage before the standby mounts it.
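
In shell terms, the takeover amounts to roughly the following; the paths, device name, and service IP are illustrative, and in practice the cluster manager's resource scripts run these steps rather than a hand-typed script:

    # on the retiring node, if it is still alive (planned failover):
    pg_ctl -D /var/lib/pgsql/data stop -m fast
    ip addr del 192.168.50.10/24 dev eth0
    umount /var/lib/pgsql/data

    # on the standby node taking over:
    # fence the old node FIRST (power it off, or disable its SAN switch port),
    # so it cannot write to the shared volume even if it comes back confused
    mount /dev/sdb1 /var/lib/pgsql/data
    ip addr add 192.168.50.10/24 dev eth0
    arping -c 3 -U -I eth0 192.168.50.10   # gratuitous ARP so clients and switches learn the new owner
    pg_ctl -D /var/lib/pgsql/data start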

The commercial cluster software vendors insist on dedicated connections for the heartbeat messages between the cluster members, and insist on having fencing capabilities (for instance, disabling the Fibre Channel switch port of the formerly active server and enabling the port for the to-be-activated server). With Linux-HA Heartbeat, you're on your own to set that up.
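
With Heartbeat, STONITH-style fencing has to be configured explicitly. The directive below follows the sample ha.cf shipped with Heartbeat v1; the power-switch type, address, and credentials are placeholders for whatever fencing device you actually have:

    # /etc/ha.d/ha.cf (excerpt)
    # power-cycle any node the cluster declares dead, via an external power switch
    stonith_host * baytech 10.0.0.3 mylogin mysecretpassword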

Of course, in a system like this, the external shared RAID should itself be redundant, with controller failover, and each cluster server should have redundant connections to the two storage controllers. With Fibre Channel you'd use two switches and two HBAs on each node; with iSCSI, two ethernet switches and two NICs on each host.
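
On the Linux side, the two paths to the dual storage controllers are typically tied together with dm-multipath; a minimal /etc/multipath.conf along these lines (option values illustrative) keeps I/O on one path and fails over to the surviving one when a path dies:

    # /etc/multipath.conf (excerpt)
    defaults {
        user_friendly_names   yes         # name devices mpath0, mpath1, ... instead of raw WWIDs
        path_grouping_policy  failover    # one active path, the others standby
        failback              immediate   # return to the preferred path when it recovers
    }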




