Re: Clarifications needed on Setting up a two node HA cluster

What I like to recommend, then, is to put the service on a virtual machine and make the VM itself the highly available service. The reason I prefer this is that the same setup can be re-used for pretty much any other service on any operating system. The downside, though, is that recovery from a node failure takes however long the VM needs to reboot, which might be too long for you. I've got a tutorial for this kind of setup:

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial
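
To give a rough idea, in rgmanager the VM itself becomes a <vm> resource in /etc/cluster/cluster.conf. The snippet below is only a sketch; the VM name and definition path are made up, and the tutorial above covers the real configuration:

  <rm>
    <!-- The VM is the service; if the host it runs on dies and is
         fenced, rgmanager boots it on the surviving node. -->
    <vm name="vm01-www" path="/shared/definitions/" autostart="1"
        exclusive="0" recovery="restart" max_restarts="2"
        restart_expire_time="600"/>
  </rm>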

If you need recovery to be faster, then you will want to make the apache service itself the HA service. The figure you linked is from the RHEL 5 documentation; be sure to use the RHEL 6 docs.
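
As a sketch of what that looks like in /etc/cluster/cluster.conf (the IP address, LV device and paths below are placeholders, adjust them to your setup), the service ties the floating IP, the shared file system and apache together:

  <rm>
    <resources>
      <ip address="192.168.1.100" monitor_link="1"/>
      <fs name="webdata" device="/dev/vg_web/lv_www"
          mountpoint="/var/www" fstype="ext4" force_unmount="1"/>
      <apache name="httpd" server_root="/etc/httpd"
              config_file="conf/httpd.conf" shutdown_wait="5"/>
    </resources>
    <service name="web" autostart="1" recovery="relocate">
      <ip ref="192.168.1.100">
        <fs ref="webdata">
          <apache ref="httpd"/>
        </fs>
      </ip>
    </service>
  </rm>

The nesting matters; rgmanager starts the IP first, then the mount, then apache, and tears them down in reverse order when the service relocates.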

You might want to look at DRBD as an alternative to a SAN if you want to keep costs down. It is, effectively, "RAID 1 over a network": the storage backing your active node is replicated to the backup node. Should the primary fail, you'd "promote" the backup node's storage, start apache and take over the floating IP address.
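
A minimal DRBD resource definition, just as a sketch (hostnames, the backing partition and the addresses are placeholders, and the syntax differs slightly between DRBD 8.3 and 8.4), looks along these lines:

  # /etc/drbd.d/r0.res - one resource, replicated between the two nodes
  resource r0 {
      protocol C;                 # synchronous replication
      device    /dev/drbd0;       # the device the cluster will use
      disk      /dev/sdb1;        # local backing partition on each node
      meta-disk internal;
      on node1.example.com {      # names must match `uname -n`
          address 10.0.0.1:7788;
      }
      on node2.example.com {
          address 10.0.0.2:7788;
      }
  }

Ideally the replication traffic goes over its own link (or your back-channel network), not the public interface.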

If you want even faster fail-over, you can run DRBD in "dual primary mode", use GFS2 on it and have apache running on both nodes all the time. Then the only thing you need to make highly available is the floating IP address.
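
For the dual-primary variant, the same resource just grows a couple of extra options, roughly like this sketch (again placeholders, and note that dual-primary absolutely requires working fencing, because GFS2 on top of it is mounted on both nodes at once):

  resource r0 {
      ...                               # same device/disk/on sections as above
      net {
          allow-two-primaries;          # both nodes may be Primary at once
          after-sb-0pri discard-zero-changes;
          after-sb-1pri discard-secondary;
          after-sb-2pri disconnect;     # split-brain recovery policies
      }
      startup {
          become-primary-on both;
      }
  }

  # Then, once, from one node (-t is <clustername>:<fsname> from cluster.conf,
  # -j 2 gives one journal per node):
  #   mkfs.gfs2 -p lock_dlm -t <clustername>:web -j 2 /dev/drbd0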

You can configure the cluster using 'luci', which can be installed on a machine outside the cluster if you like. Personally, I recommend people work directly with the core /etc/cluster/cluster.conf file, as it helps you better understand what is happening behind the scenes.
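
To give you a feel for the file, here is a bare-bones two-node skeleton (cluster name, node names and the IPMI fence details are all placeholders; the fencing section is not optional, as a two-node cluster without working fencing is asking for data corruption):

  <?xml version="1.0"?>
  <cluster name="webcluster" config_version="1">
      <!-- two_node lets the cluster keep quorum with a single member -->
      <cman two_node="1" expected_votes="1"/>
      <clusternodes>
          <clusternode name="node1.example.com" nodeid="1">
              <fence>
                  <method name="ipmi">
                      <device name="ipmi_n1" action="reboot"/>
                  </method>
              </fence>
          </clusternode>
          <clusternode name="node2.example.com" nodeid="2">
              <fence>
                  <method name="ipmi">
                      <device name="ipmi_n2" action="reboot"/>
                  </method>
              </fence>
          </clusternode>
      </clusternodes>
      <fencedevices>
          <fencedevice name="ipmi_n1" agent="fence_ipmilan"
                       ipaddr="10.0.0.11" login="admin" passwd="secret"/>
          <fencedevice name="ipmi_n2" agent="fence_ipmilan"
                       ipaddr="10.0.0.12" login="admin" passwd="secret"/>
      </fencedevices>
      <rm/>
  </cluster>

Whenever you change it, bump config_version; on RHEL 6, 'cman_tool version -r' will propagate and activate the new version (ricci needs to be running on the nodes).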

Happy clustering. :)

On 06/26/2012 10:13 PM, Zama Ques wrote:
Primary concern is high availability only.

------------------------------------------------------------------------
*From:* Digimer <lists@xxxxxxxxxx>
*To:* Zama Ques <queszama@xxxxxxxx>; linux clustering
<linux-cluster@xxxxxxxxxx>
*Sent:* Tuesday, 26 June 2012 10:52 PM
*Subject:* Re:  Clarifications needed on Setting up a two
node HA cluster

Is your primary concern load balancing or high availability?

On 06/26/2012 10:36 AM, Zama Ques wrote:
 > Hi,
 >
 > I need to set up a two-node HA cluster on top of HP blade servers using
 > the Red Hat cluster solution. I have started going through the docs and
 > have the following doubts so far.
 >
 > I am planning to build my two node setup based on the following
 > architecture as shown in Fig 1.1 in
 > http://www.centos.org/docs/5/pdf/Cluster_Administration.pdf
 >
 >
 >
 > As per the above figure, I am planning to build my setup as follows.
 >
 > 1) Each of the nodes will have two interfaces. One interface, say eth0,
 > on both nodes will be assigned a private address and will be connected
 > to a switch carrying the cluster traffic. The other interface on both
 > nodes, say eth1, will be assigned a public IP address and will be
 > connected to another switch that reaches the internet via a
 > firewall/router. I will also assign a virtual public IP address to the
 > cluster by configuring an IP resource in Conga. I will add this IP
 > address to the Listen directive in the apache configuration file so
 > that apache serves client requests on this address only. This IP
 > address will also resolve to the registered domain name of the portal
 > we are going to serve from this setup.
 >
 > As a prerequisite for the Conga setup, I will update /etc/hosts on both
 > nodes with the FQDNs corresponding to the private IP addresses assigned
 > earlier to eth0 on both nodes.
 >
 > 2) Regarding storage, I am not sure yet what kind of storage device
 > will be used. If it is not SAN storage, then I will configure one of
 > the partitions on the storage as an iSCSI target and share it with both
 > cluster nodes. From both cluster nodes, I will create volumes using
 > CLVM and use GFS on top of them as the file system.
 >
 > 3) Regarding cluster resources, we will use apache as one of the
 > resources to serve HTTP traffic. I will configure apache using Conga
 > and, after configuration is done, copy the httpd config file manually
 > to the other cluster node. I will not start the apache service on
 > either cluster node, leaving it to the cluster software to start
 > services. I will also run 'chkconfig httpd off' on both nodes and will
 > not add the GFS file system to /etc/fstab, leaving it to the cluster
 > to handle mounting of the file system.
 >
 > Sorry for the lengthy mail, but I want to clear my doubts before
 > starting.
 >
 > I will be very grateful if members can read my long mail patiently and
 > reply whether I am going in the right direction.
 >
 >
 > Thanks in Advance
 > Zaman
 >
 >
 >
 >
 >
 >
 >
 > --
 > Linux-cluster mailing list
 > Linux-cluster@xxxxxxxxxx
 > https://www.redhat.com/mailman/listinfo/linux-cluster
 >


--
Digimer
Papers and Projects: https://alteeve.com






--
Digimer
Papers and Projects: https://alteeve.com


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

