Problem with 2 node cluster and GFS2





Hello all,

Any ideas on what the following messages mean?

GFS2: fsid=: Trying to join cluster "lock_dlm", "tweety:gfs0"
GFS2: fsid=tweety:gfs0.0: Joined cluster. Now mounting FS...
GFS2: fsid=tweety:gfs0.0: jid=0, already locked for use
GFS2: fsid=tweety:gfs0.0: jid=0: Looking at journal...
GFS2: fsid=tweety:gfs0.0: jid=0: Acquiring the transaction lock...
GFS2: fsid=tweety:gfs0.0: jid=0: Replaying journal...
GFS2: fsid=tweety:gfs0.0: jid=0: Replayed 4 of 4 blocks
GFS2: fsid=tweety:gfs0.0: jid=0: Found 0 revoke tags
GFS2: fsid=tweety:gfs0.0: jid=0: Journal replayed in 1s
GFS2: fsid=tweety:gfs0.0: jid=0: Done
GFS2: fsid=tweety:gfs0.0: jid=1: Trying to acquire journal lock...
GFS2: fsid=tweety:gfs0.0: jid=1: Looking at journal...
GFS2: fsid=tweety:gfs0.0: jid=1: Done
GFS2: fsid=tweety:gfs0.0: jid=2: Trying to acquire journal lock...
GFS2: fsid=tweety:gfs0.0: jid=2: Looking at journal...
GFS2: fsid=tweety:gfs0.0: jid=2: Done
GFS2: fsid=tweety:gfs0.0: jid=3: Trying to acquire journal lock...
GFS2: fsid=tweety:gfs0.0: jid=3: Looking at journal...
GFS2: fsid=tweety:gfs0.0: jid=3: Done
GFS2: fsid=tweety:gfs0.0: fatal: invalid metadata block
GFS2: fsid=tweety:gfs0.0:   bh = 162602 (magic number)
GFS2: fsid=tweety:gfs0.0:   function = gfs2_meta_indirect_buffer, file = fs/gfs2/meta_io.c, line = 438
GFS2: fsid=tweety:gfs0.0: about to withdraw this file system
GFS2: fsid=tweety:gfs0.0: telling LM to withdraw
GFS2: fsid=tweety:gfs0.0: withdrawn

Call Trace:
 [<ffffffff8865215e>] :gfs2:gfs2_lm_withdraw+0xc1/0xd0
 [<ffffffff80014cca>] sync_buffer+0x0/0x3f
 [<ffffffff80062a3f>] out_of_line_wait_on_bit+0x6c/0x78
 [<ffffffff8009ba34>] wake_bit_function+0x0/0x23
 [<ffffffff8866395b>] :gfs2:gfs2_meta_check_ii+0x2c/0x38
 [<ffffffff88655dbb>] :gfs2:gfs2_meta_indirect_buffer+0x1e3/0x284
 [<ffffffff88650a88>] :gfs2:gfs2_inode_refresh+0x22/0x2b9
 [<ffffffff8864feff>] :gfs2:inode_go_lock+0x29/0x57
 [<ffffffff8864f08f>] :gfs2:glock_wait_internal+0x1e3/0x259
 [<ffffffff8864f2b3>] :gfs2:gfs2_glock_nq+0x1ae/0x1d4
 [<ffffffff88659e35>] :gfs2:gfs2_getattr+0x7d/0xc3
 [<ffffffff88659e2d>] :gfs2:gfs2_getattr+0x75/0xc3
 [<ffffffff8000de0b>] vfs_getattr+0x2d/0xa9
 [<ffffffff8003e6ff>] vfs_lstat_fd+0x2f/0x47
 [<ffffffff8002a6ab>] sys_newlstat+0x19/0x31
 [<ffffffff8005c229>] tracesys+0x71/0xe0
 [<ffffffff8005c28d>] tracesys+0xd5/0xe0



I see this on both nodes whenever I try to access one particular folder from either node.
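For reference, the block number in the "bh = 162602" line can be turned into a byte offset on the device, which may help when inspecting the corrupt metadata block. A minimal sketch, assuming the default 4 KiB GFS2 block size (the device path is a placeholder):

```shell
# Byte offset of the block GFS2 flagged as invalid metadata.
# Assumes the default 4 KiB block size; verify yours first, e.g. with
# 'gfs2_edit -p sb /dev/your_vg/your_lv' (device path is a placeholder).
BLOCK=162602   # "bh = 162602" from the kernel log
BLKSZ=4096     # assumed block size in bytes
echo $((BLOCK * BLKSZ))

# With the filesystem unmounted on ALL nodes, a read-only check
# (no changes written) would look like:
#   fsck.gfs2 -n /dev/your_vg/your_lv
```

Running fsck.gfs2 without `-n` would attempt repairs, so it should only be done once the filesystem is confirmed unmounted cluster-wide.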

Thank you all

Theophanis Kontogiannis

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
