Re: [Linux-cluster] unable to mount gfs partition

Adam Manthei wrote:
On Tue, Jul 20, 2004 at 08:13:15PM +0800, chloong wrote:
  
hi all,
I managed to set up the whole GFS cluster. I have 2 server nodes in
this GFS cluster.

One node mounts the GFS partition without any issue, but the other one
is not able to mount, giving me this error:
#mount -t gfs /dev/pool/smsgateclu_pool0 /gfs1
mount: wrong fs type, bad option, bad superblock on 
/dev/pool/smsgateclu_pool0,
          or too many mounted file systems

Is anyone else facing this problem?

This is the standard error message that mount gives on error.  In general it
isn't very useful.  More accurate error messages are on the console.  Post
your `dmesg` output if you are still having problems.
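
A quick way to collect both, assuming a stock syslog setup that sends
kernel messages to /var/log/messages:

# dmesg | tail -n 30
# tail -n 30 /var/log/messages

Run them on the failing node right after the failed mount attempt, so the
relevant lines are at the end of the output.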

  
hi,
I checked dmesg; the error is:

lock_gulm: fsid=cluster1:gfs1: Exiting gulm_mount with errors -111
GFS: can't mount proto = lock_gulm, table = cluster1:gfs1, hostdata =

whereas in /var/log/messages the error is:

lock_gulm: ERROR Got a -111 trying to login to lock_gulmd.  Is it running?
lock_gulm: ERROR cm_login failed. -111
lock_gulm: ERROR Got a -111 trying to start the threads.
lock_gulm: fsid=cluster1:gfs1: Exiting gulm_mount with errors -111
GFS: can't mount proto = lock_gulm, table = cluster1:gfs1, hostdata =
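
From what I can find, error -111 is ECONNREFUSED on Linux, i.e. the lock_gulm
kernel module tried to open a connection to lock_gulmd and nothing was
listening. To check whether the daemon is up I ran the following (the
gulm_tool line is a guess from the docs, so the exact subcommand may differ
on your release):

# ps -C lock_gulmd -o pid,args
# gulm_tool getstats clu1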

I have 2 nodes in the GFS cluster: one is the lock_gulm server and the other one is not.
The one that is not a lock_gulm server is giving me the mount error...

Do I need to start the lock_gulm daemon on this server that is not the lock_gulm server?

When I start lock_gulmd on this server, it gives me this error in /var/log/messages:

lock_gulmd[18399]: You are running in Standard mode.
lock_gulmd[18399]: I am (clu2.abc.com) with ip (192.168.11.212)
lock_gulmd[18399]: Forked core [18400].
lock_gulmd_core[18400]: ERROR [core_io.c:1029] Got error from reply: (clu1:192.168.11.211) 1006:Not Allowed
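
I notice lock_gulmd reports this node as (clu2.abc.com) while my nodes.ccs
below lists it as clu2; as far as I understand, gulm matches nodes by the
name returned by `uname -n`, so could this mismatch be what triggers the
1006:Not Allowed? I compared the two like this (the nodes.ccs path is just
where I keep my copy of the config files; yours may differ):

# uname -n
# grep -A 2 clu2 /root/cluster/nodes.ccs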

My cluster.ccs:

cluster {
   name = "smsgateclu"
   lock_gulm {
     servers = ["clu1"]
     heartbeat_rate = 0.3
     allowed_misses = 1
   }
}
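
If the node names need to be FQDNs, I suppose the servers list here would
have to match as well; a sketch, assuming `uname -n` on clu1 returns
clu1.abc.com:

cluster {
   name = "smsgateclu"
   lock_gulm {
     servers = ["clu1.abc.com"]
     heartbeat_rate = 0.3
     allowed_misses = 1
   }
}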

nodes.ccs:

nodes {
 clu1 {
  ip_interfaces {
   eth2 = "192.168.11.211"
  }
  fence {
   human {
    admin {
     ipaddr = "192.168.11.211"
    }
   }
  }
 }
 clu2 {
  ip_interfaces {
   eth2 = "192.168.11.212"
  }
  fence {
   human {
    admin {
     ipaddr = "192.168.11.212"
    }
   }
  }
 }
}
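
And the matching nodes.ccs sketch under the same FQDN assumption, otherwise
unchanged:

nodes {
 clu1.abc.com {
  ip_interfaces {
   eth2 = "192.168.11.211"
  }
  fence {
   human {
    admin {
     ipaddr = "192.168.11.211"
    }
   }
  }
 }
 clu2.abc.com {
  ip_interfaces {
   eth2 = "192.168.11.212"
  }
  fence {
   human {
    admin {
     ipaddr = "192.168.11.212"
    }
   }
  }
 }
}

I understand the CCS archive has to be rebuilt (e.g. with ccs_tool create)
after editing these files for the change to take effect.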

fence.ccs:

fence_devices {
 admin {
  agent = "fence_manual"
 }
}
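
One more thing I read about fence_manual: if a node ever has to be fenced,
this agent just waits for an administrator to reset the node and acknowledge
it by hand, something like the line below (the flag syntax is my assumption;
the fence_ack_manual man page has the exact form):

# fence_ack_manual -n clu2.abc.com

So I gather it is fine for testing, but not recommended for production.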

Please help!

Thanks.
