Re: Quorum Disk on 2 nodes out of 4?

On Wed, Nov 18, 2009 at 06:32:25AM +0100, Fabio M. Di Nitto wrote:
> > Apologies if a similar question has been asked in the past, any inputs, 
> > thoughts, or pointers welcome. 
> 
> Ideally you would find a way to plug the storage into the 2 nodes that
> do not have it now, and then run qdisk on top.
> 
> At that point you can also benefit from "global" failover of the
> applications across all the nodes.
> 
> Fabio

Thanks for the reply and the pointers; attaching all 4 nodes to the storage
and running qdisk on top does sound best. In the particular scenario above,
though, 2 of the nodes have no HBA cards or attachment to the storage. If
storage connections could not be obtained and the cluster were to split into
two, an IP tiebreaker would probably have to be introduced instead.

I wonder how common that type of quorum disk setup is these days. I gather
most would use GFS in this scenario with 4 nodes, eliminating the need for
any specific failover of an ext3 disk mount etc., and merely failing over
the services across all cluster nodes instead.
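For reference, a minimal cluster.conf sketch of the qdisk tiebreaker setup
discussed above, assuming RHCS/cman with a shared LUN visible to all 4 nodes
(the device path, label, and ping target are hypothetical, not from the
thread):

```xml
<!-- cluster.conf fragment: qdisk tiebreaker for a 4-node cluster.
     The shared device would first be initialised with something like:
       mkqdisk -c /dev/mapper/qdisk-lun -l rhcs_qdisk
     Giving qdisk nodes-1 votes lets any single node plus the quorum
     disk stay quorate: 4x1 node votes + 3 qdisk votes = 7, quorum = 4. -->
<cman expected_votes="7"/>
<quorumd interval="1" tko="10" votes="3" label="rhcs_qdisk">
    <!-- Heuristic: a node keeps its qdisk vote only while it can
         reach the router, serving as an IP tiebreaker of sorts. -->
    <heuristic program="ping -c1 -w1 192.168.1.1" score="1" interval="2" tko="3"/>
</quorumd>
```

The label passed to mkqdisk has to match the quorumd label attribute, and
expected_votes has to account for the qdisk votes.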

Karl

--
Karl Podesta
 

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
