Re: Two-node cluster issue without shared storage

Thanks Jeff, I share the same reasoning.


Jeff Sturm wrote:
Certainly.  That third node need not run any cluster services at all other than fencing, and yet would guarantee a quorum in the event of the loss of any single node.
 
A quorum disk would theoretically solve this as well, but for reasons I can't quite articulate I suspect the three-node cluster is superior.  (Besides, we have stockpiles of cheap hardware where I'm at, so there's little reason for us not to do it.)
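For illustration only, here is a minimal sketch of what such a three-node cluster.conf could look like; the node names and the empty fencing section are placeholders, not a tested configuration:

  <?xml version="1.0"?>
  <cluster name="example" config_version="1">
    <!-- Three votes in total: any two surviving nodes keep quorum. -->
    <cman expected_votes="3"/>
    <clusternodes>
      <clusternode name="node1.example.com" votes="1" nodeid="1"/>
      <clusternode name="node2.example.com" votes="1" nodeid="2"/>
      <!-- Arbitration node: votes and can fence, but runs no services. -->
      <clusternode name="node3.example.com" votes="1" nodeid="3"/>
    </clusternodes>
    <fencedevices/>
  </cluster>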

From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Rodrique Heron
Sent: Friday, October 24, 2008 12:00 PM
To: linux clustering
Subject: Re: Two-node cluster issue without shared storage

Jeff

I have a two-node cluster only because my storage array supports only two nodes. Can I add a third node that has no access to the storage? I am using CLVM to run domU's.
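If a third node can join without seeing the storage, I assume a restricted failover domain would keep it from ever hosting the domU services. A rough sketch, with made-up names:

  <rm>
    <failoverdomains>
      <!-- Restricted: services may only run on the two storage-attached
           nodes; the third node exists for quorum/arbitration only. -->
      <failoverdomain name="storage_nodes" restricted="1" ordered="0">
        <failoverdomainnode name="node1"/>
        <failoverdomainnode name="node2"/>
      </failoverdomain>
    </failoverdomains>
  </rm>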



Jeff Sturm wrote:

For what it's worth, considerations like these have caused us to abandon any efforts to build a two-node cluster.

From this point forward all our RHCS deployments will have a minimum of 3 nodes, even if the 3rd node is a small node that provides no resources and only exists for arbitration purposes.  (It was going to be that, or a quorum disk for our application, but we have no experience running a quorum disk over the long haul in a production environment.)

Hope this helps someone.
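(If anyone does go the quorum disk route instead, my understanding from the qdisk(5) man page is that the cluster.conf stanza looks roughly like the following, with the quorum disk supplying a tie-breaking vote; the label, timings and heuristic below are illustrative only:

  <quorumd interval="1" tko="10" votes="1" label="myqdisk">
    <!-- A node must pass this heuristic to keep its quorum disk vote. -->
    <heuristic program="ping -c1 192.168.1.1" score="1" interval="2"/>
  </quorumd>
)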

> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx
> [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Chen,
> Mockey (NSN - CN/Cheng Du)
> Sent: Thursday, October 23, 2008 10:36 PM
> To: linux clustering
> Subject: RE: Two-node cluster issue without shared storage
>

>
> >-----Original Message-----
> >From: linux-cluster-bounces@xxxxxxxxxx
> >[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of ext Lon
> >Hohberger
> >Sent: October 24, 2008 0:02
> >To: linux clustering
> >Subject: Re: Two-node cluster issue without shared storage
> >
> >On Thu, 2008-10-16 at 17:10 +0800, Chen, Mockey (NSN - CN/Cheng Du)
> >wrote:
> >> Hi,
> >>
> >> I want to set up a two-node cluster, using active/standby mode to
> >> run my service. Even if one node suffers a hardware failure, such
> >> as a power cut, the other node must take over from the failed node
> >> and keep providing the service.
> >>
> >> In my environment I have no shared storage, so I cannot use a
> >> quorum disk. Is there any other way to implement this? I searched
> >> and found that a 'tiebreaker IP' might meet my requirement, but I
> >> could not find any hints on how to configure it.
> >
> >Since you have no shared data, you may be able to run without
> >fencing.
> >
> >That should be pretty straightforward, but you might need to comment
> >out the "fenced" startup from the cman init script.
> >
> >In this case, the worst that will happen is that both nodes end up
> >running the service at the same time in the event of a network
> >partition.
> >
> >The other downside is that if the cluster divides into two partitions
> >and later merges back into one partition, I don't think certain
> >things will work right; you will need to detect this event and reboot
> >one of the nodes.
> >
> >-- Lon
>
> I know about such defects in a two-node cluster.
> Since our service is mission-critical, I want to know how to avoid
> such failure cases.
>
> Thanks.




-- 
Rodrique Heron 
Systems Administrator/
Red Hat Certified Engineer
Baruch College 
1 Bernard Baruch Way,
Box H-0910
New York, NY 10010
Phone: (646) 312-1055 

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
