On 1/26/07, Hagmann, Michael <Michael.Hagmann@xxxxxxxxx> wrote:
> Hi,
>
> What I can recommend (in short) is a RHEL4 U4+ / GFS cluster. When you mount the same filesystem on more than one node at the same time, you need a cluster filesystem (like GFS, or maybe OCFS2): RHEL4 U4 / GFS with DLM, and a quorum disk when you only have two nodes. Also very important is the fencing method (we now use the iLO interface of our HP servers). And for the cluster interconnect I recommend a separate network.
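For anyone following along in the archives, this is roughly what that recipe looks like on the command line; the cluster name "mycluster", the filesystem name "gfs01", and the device path are made-up examples, not anyone's real setup:

    # one journal per node (-j 2), DLM locking, lock table "clustername:fsname"
    gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 2 /dev/md0

    # once cman, fenced and the DLM are running, mount it on every node
    mount -t gfs /dev/md0 /mnt/gfs01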
As suggested, we have gone for the above arrangement, and have also made provision for quorum disks, if required. Initially I put everything on a single network, and now I am trying to split it into two different networks, as described in the previous email.
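For reference, a trimmed sketch of the sort of /etc/cluster/cluster.conf this leads to; the node names, iLO addresses and credentials below are placeholders. Note that the node names should resolve to the addresses on the separate interconnect network, since that is how cman picks which interface to use:

    <?xml version="1.0"?>
    <cluster name="mycluster" config_version="1">
      <!-- two_node="1" keeps a 2-node cluster quorate with a single vote -->
      <cman two_node="1" expected_votes="1"/>
      <clusternodes>
        <clusternode name="node1-ic" votes="1">
          <fence><method name="1"><device name="ilo-node1"/></method></fence>
        </clusternode>
        <clusternode name="node2-ic" votes="1">
          <fence><method name="1"><device name="ilo-node2"/></method></fence>
        </clusternode>
      </clusternodes>
      <fencedevices>
        <fencedevice name="ilo-node1" agent="fence_ilo" hostname="10.0.0.101" login="admin" passwd="secret"/>
        <fencedevice name="ilo-node2" agent="fence_ilo" hostname="10.0.0.102" login="admin" passwd="secret"/>
      </fencedevices>
    </cluster>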
> For the multipath connection you can use the device-mapper multipath tools (they come with RHEL4 U4), or you can use a vendor-specific driver, like the QLogic driver from HP in our case.
The storage vendor, HP, had suggested making a software RAID using the multipath option to achieve the desired functionality, and we followed that advice. There seemed to be some conflict with the device-mapper tools, and the md RAID devices were being stopped after initialisation, but making a new initrd image solved the problem. Thanks for the link to the presentation, but I could not find an English translation.
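In case it saves someone else the same detour, the pieces were roughly these; /dev/sda and /dev/sdb here stand for the two paths to the same LUN, and your device names will differ:

    # HP's suggestion: an md array with the "multipath" personality over both paths
    mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sda /dev/sdb

    # keep dm-multipath off those paths, in /etc/multipath.conf
    # (the section is called devnode_blacklist in older RHEL4 releases)
    blacklist {
        devnode "^sd[ab]$"
    }

    # rebuild the initrd so the md device is assembled cleanly at boot
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)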
> Also, you should always use an odd number of members (like 3, 5, 7, ...), because fencing then works better. But when you have a real HA solution, most of the time you also have two datacenters, and then the cluster should keep working when one datacenter is not available. Then you need either a new datacenter ;-) for the third member, or you fall back to the fencing problem! And then maybe the quorum disk is the best solution.
I am forced to go ahead with two nodes only. I will update the forum once everything is in place and working. Thank you very much for your suggestions.
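For completeness, a sketch of what the quorum-disk provision would look like once we enable it; the label, the device, and the ping-heuristic target are placeholders:

    # initialise a small shared partition that both nodes can see
    mkqdisk -c /dev/sdc1 -l myqdisk

    # cluster.conf: the qdisk adds one vote; two_node is switched off and
    # expected_votes raised, so one node plus the qdisk still has quorum
    <cman two_node="0" expected_votes="3"/>
    <quorumd interval="1" tko="10" votes="1" label="myqdisk">
      <heuristic program="ping -c1 -w1 10.0.0.254" score="1" interval="2"/>
    </quorumd>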
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster