Re: Starter Cluster / GFS

On 10-11-11 04:59 AM, Jankowski, Chris wrote:
> Gordan,
> 
> I do understand the mechanism.  I was trying to gently point out that this behaviour is unacceptable for my commercial IP customers. The customers buy clusters for high availability. Losing the whole cluster due to a single component failure (the heartbeat link) is not acceptable. The heartbeat link is a huge SPOF, and the cluster design does not support redundant heartbeat links.
> 
> Also, none of the commercially available UNIX or Linux clusters (HP ServiceGuard, Veritas, SteelEye) displays this type of behaviour, and they do not clobber cluster filesystems.  So it is possible to achieve an acceptable reaction to this type of failure.
> 
> Regards,
> 
> Chris Jankowski

I can't speak to heartbeat, but under RHCS you can define multiple fence
methods and devices per node, and they will be used in the order in which
they are found in the configuration file.
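
For illustration, a minimal cluster.conf sketch along those lines (the node
name, device names, addresses and credentials below are all made up) might
look like this; fenced tries the first method (IPMI) and only falls back to
the second (a switched PDU) if the first fails:

  <cluster name="example" config_version="1">
    <clusternodes>
      <clusternode name="node01.example.com" nodeid="1">
        <fence>
          <!-- Tried first: out-of-band fencing via the node's IPMI/BMC -->
          <method name="ipmi">
            <device name="ipmi_node01" action="reboot"/>
          </method>
          <!-- Tried only if the IPMI method fails: switched PDU outlet -->
          <method name="pdu">
            <device name="pdu1" port="1" action="reboot"/>
          </method>
        </fence>
      </clusternode>
    </clusternodes>
    <fencedevices>
      <fencedevice name="ipmi_node01" agent="fence_ipmilan" ipaddr="10.0.0.1" login="admin" passwd="secret"/>
      <fencedevice name="pdu1" agent="fence_apc" ipaddr="10.0.0.10" login="apc" passwd="apc"/>
    </fencedevices>
  </cluster>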

With the power-based devices I've used (again, just IPMI and Node Assassin),
the poweroff call is more or less instant. Personally, I've not seen a lag
exceeding a second with these devices. I would consider a fence device
that cannot disable a node in under a second to be flawed.
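
If you want to see what your own hardware does, one simple check (assuming
the node is already defined in cluster.conf) is to time a manual fence call
from another cluster member and watch how quickly the target drops:

  # Fences the named node using the methods defined in cluster.conf,
  # in the order they appear; the node name here is a placeholder.
  time fence_node node01.example.com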

-- 
Digimer
E-Mail: digimer@xxxxxxxxxxx
AN!Whitepapers: http://alteeve.com
Node Assassin:  http://nodeassassin.org


