Re: fence gnbd doesn't work as expected

On Tue, Oct 30, 2007 at 09:00:55AM +0100, carlopmart wrote:
> Hi all,
> 
>  I have already installed a two-node cluster using gnbd as a fence device. 
>  When the two nodes come up at the same time everything works fine, but when 
> I need to start only one node, GFS doesn't mount because the fence device 
> doesn't work. The error is:
> 
> Mounting GFS filesystems:  /sbin/mount.gfs: lock_dlm_join: gfs_controld 
> join error: -22
> /sbin/mount.gfs: error mounting lockproto lock_dlm.
> 
> I am using a third server as a GNBD server without serving disks. Why 
> doesn't this work?? Perhaps do I need a quorum disk??

Let me see if I understand what you are doing. You want to use
fence_gnbd as your fence device, but the nodes in your cluster aren't
actually using gnbd devices for their shared storage. If this is true,
it won't work at all. All fence_gnbd guarantees is that the fenced node
will not be able to access its gnbd devices. If the GFS filesystems are
on the gnbd devices, this will keep the fenced node from being able to corrupt
them. If a GFS filesystem is not on a GNBD device, fence_gnbd does
nothing at all to protect it from corruption.
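
For that to work, the shared storage itself has to be a gnbd export from
the gnbd server, imported on both nodes, with GFS sitting on top of it.
Roughly like this (untested sketch; the device name, export name, journal
count and mount point are just examples):

    # on gnbdserv.hpulabs.org: start the server and export a block device
    # (uncached, which is what GFS over gnbd needs)
    gnbd_serv
    gnbd_export -v -e gfs_disk -d /dev/sdb1

    # on node01 and node02: import the export and put GFS on top of it
    modprobe gnbd
    gnbd_import -v -i gnbdserv.hpulabs.org
    gfs_mkfs -p lock_dlm -t XenDomUcluster:gfs01 -j 2 /dev/gnbd/gfs_disk  # run once, from one node
    mount -t gfs /dev/gnbd/gfs_disk /data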

You really need a quorum disk to deal with this.
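
If you add one, it would look something like this (again just a sketch, not
tested; the device, label and timings are examples). Create the quorum
partition once on a shared block device both nodes can see, run qdiskd on
both nodes, and replace the two_node setup in cluster.conf:

    mkqdisk -c /dev/sdc1 -l XenDomU_qdisk

    <cman expected_votes="3"/>
    <quorumd interval="1" tko="10" votes="1" label="XenDomU_qdisk"/>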

-Ben
 
> My cluster.conf:
> 
> <?xml version="1.0"?>
> <cluster alias="XenDomUcluster" config_version="3" name="XenDomUcluster">
>         <fence_daemon post_fail_delay="0" post_join_delay="3"/>
>         <clusternodes>
>                 <clusternode name="node01.hpulabs.org" nodeid="1" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="gnbd-fence" nodename="node01.hpulabs.org"/>
>                                 </method>
>                         </fence>
>                         <multicast addr="239.192.75.55" interface="eth0"/>
>                 </clusternode>
>                 <clusternode name="node02.hpulabs.org" nodeid="2" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="gnbd-fence" nodename="node02.hpulabs.org"/>
>                                 </method>
>                         </fence>
>                         <multicast addr="239.192.75.55" interface="eth0"/>
>                 </clusternode>
>         </clusternodes>
>         <cman expected_votes="1" two_node="1">
>                 <multicast addr="239.192.75.55"/>
>         </cman>
>         <fencedevices>
>                 <fencedevice agent="fence_gnbd" name="gnbd-fence" servers="gnbdserv.hpulabs.org"/>
>         </fencedevices>
>         <rm log_facility="local4" log_level="7">
>                 <failoverdomains>
>                         <failoverdomain name="PriCluster" ordered="1" restricted="1">
>                                 <failoverdomainnode name="node01.hpulabs.org" priority="1"/>
>                                 <failoverdomainnode name="node02.hpulabs.org" priority="2"/>
>                         </failoverdomain>
>                         <failoverdomain name="SecCluster" ordered="1" restricted="1">
>                                 <failoverdomainnode name="node02.hpulabs.org" priority="1"/>
>                                 <failoverdomainnode name="node01.hpulabs.org" priority="2"/>
>                         </failoverdomain>
>                 </failoverdomains>
>                 <resources>
>                         <ip address="172.25.50.11" monitor_link="1"/>
>                         <ip address="172.25.50.12" monitor_link="1"/>
>                         <ip address="172.25.50.13" monitor_link="1"/>
>                         <ip address="172.25.50.14" monitor_link="1"/>
>                         <ip address="172.25.50.15" monitor_link="1"/>
>                         <ip address="172.25.50.16" monitor_link="1"/>
>                         <ip address="172.25.50.17" monitor_link="1"/>
>                         <ip address="172.25.50.18" monitor_link="1"/>
>                         <ip address="172.25.50.19" monitor_link="1"/>
>                         <ip address="172.25.50.20" monitor_link="1"/>
>                 </resources>
>                 <service autostart="1" domain="PriCluster" name="rsync-svc" recovery="relocate">
>                         <ip ref="172.25.50.11">
>                                 <script file="/data/cfgcluster/etc/init.d/rsyncd" name="rsyncd"/>
>                         </ip>
>                 </service>
>                 <service autostart="1" domain="SecCluster" name="wwwsoft-svc" recovery="relocate">
>                         <ip ref="172.25.50.12">
>                                 <script file="/data/cfgcluster/etc/init.d/httpd-mirror" name="httpd-mirror"/>
>                         </ip>
>                 </service>
>                 <service autostart="1" domain="PriCluster" name="proxy-svc" recovery="relocate">
>                         <ip ref="172.25.50.13">
>                                 <script file="/data/cfgcluster/etc/init.d/squid" name="squid"/>
>                         </ip>
>                 </service>
>                 <service autostart="1" domain="SecCluster" name="mail-svc" recovery="relocate">
>                         <ip ref="172.25.50.14">
>                                 <script file="/data/cfgcluster/etc/init.d/postfix-cluster" name="postfix-cluster"/>
>                         </ip>
>                 </service>
>         </rm>
> </cluster>
> -- 
> CL Martinez
> carlopmart {at} gmail {d0t} com
> 

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
