RE: GFS Mounting Issues

I'm pretty sure you need to be running fenced and clvmd as well to get this to work; there was a message relating to this in your
original post:

/sbin/mount.gfs: node not a member of the default fence domain
/sbin/mount.gfs: error mounting lockproto lock_dlm
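
If they're not running, the usual startup sequence on RHEL5 is something like the
below. This is only a rough sketch: the service names assume the stock cluster
init scripts, and the volume path is a made-up placeholder.

service cman start     # starts cman and fenced; should join the default fence domain
service clvmd start    # clustered LVM, needed if the GFS volume sits on a clustered VG
fence_tool join        # join the fence domain by hand if cman didn't
cman_tool services     # check that the fence "default" group now lists this node
mount -t gfs /dev/vg_cluster/lv_gfs /mnt/gfs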

You should see something like this in the output from cman_tool services.

type             level name       id       state       
fence            0     default    00010001 none        
[1 2]
dlm              1     clvmd      00020001 none        
[1 2]
dlm              1     rgmanager  00030001 none        
[1 2]

The fence domain will need to be configured correctly in your cluster.conf file, and I believe it will start automatically when you
start cman.  There are probably some errors in your log stating that the fence domain couldn't start up when you started cman.
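
For reference, the fencing pieces of cluster.conf look roughly like this. It's
only a sketch: the fence device name, agent, address, and credentials are
made-up placeholders (substitute whatever fence hardware you actually have),
and remember to bump config_version whenever you edit the file.

<cluster name="rhc1" config_version="9">
  <clusternodes>
    <clusternode name="node01.rhc1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="apc1" port="1"/>
        </method>
      </fence>
    </clusternode>
    <!-- ...one clusternode entry per node... -->
  </clusternodes>
  <fencedevices>
    <fencedevice name="apc1" agent="fence_apc"
                 ipaddr="192.168.0.50" login="apc" passwd="secret"/>
  </fencedevices>
</cluster>

If fenced did fail when cman started, grepping /var/log/messages for "fenced"
on the node that can't mount will usually show you why.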

Ben


> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Caron,
> Chris
> Sent: 31 July 2008 18:30
> To: linux clustering
> Subject: RE:  GFS Mounting Issues
> 
> Bob,
> 
> Thank you for replying; I should have included more information.  I was
> going on the basis that people would assume a valid cluster was running
> (but we should never assume that, right? :) ).  After your email I ran a
> few status tools to report more information, in the hope it might help
> guide someone to an answer.  Had you not sent your email, I wouldn't have
> uncovered the very odd one at the bottom of this email.
> 
> [root@node01 ~]# service cman status
> cman is running.
> 
> [root@node01 ~]# clustat
> Cluster Status for rhc1 @ Thu Jul 31 13:21:35 2008
> Member Status: Quorate
> 
>  Member Name                                   ID   Status
>  ------ ----                                   ---- ------
>  node01.rhc1                                      1 Online, Local
>  node02.rhc1                                      2 Online
>  node03.rhc1                                      3 Online
>  node04.rhc1                                      4 Online
>  node05.rhc1                                      5 Offline
> 
> (Note: I tailored the above output so it wouldn't wrap)
> 
> [root@node01 ~]# service rgmanager status
> clurgmgrd (pid 13235) is running...
> 
> [root@node01 ~]# cman_tool status
> Version: 6.1.0
> Config Version: 8
> Cluster Name: rhc1
> Cluster Id: 1575
> Cluster Member: Yes
> Cluster Generation: 36
> Membership state: Cluster-Member
> Nodes: 4
> Expected votes: 5
> Total votes: 4
> Quorum: 3
> Active subsystems: 8
> Flags: Dirty
> Ports Bound: 0 177
> Node name: node01.rhc1
> Node ID: 1
> Multicast addresses: <not important>
> Node addresses: <not important>
> 
> This one concerns me:
> [root@node01 ~]# cman_tool services
> type             level name       id       state
> dlm              1     rgmanager  00010002 FAIL_ALL_STOPPED
> [1 2 3]
> 
> Chris Caron
> 




--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
