Hello all,

I have a two-node cluster on CentOS 5.2 with kernel 2.6.18-92.1.6.el5.centos.plus, running for some time with DRBD 8.2. I also have gfs2-utils-0.1.44-1.el5_2.1.
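For reference, this is roughly how I check each layer of the stack (the LV path is from my setup):

    cat /proc/drbd                            # DRBD connection and role state
    pvs && vgs && lvs                         # the LVM layers sitting on DRBD
    gfs2_tool sb /dev/mapper/vg0-data0 all    # GFS2 superblock fields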
Suddenly, when trying to mount my GFS2 filesystem (running on an LV, over a VG, over a PV, over DRBD), I started getting:

GFS2: fsid=: Trying to join cluster "lock_dlm", "tweety:gfs2-00"
GFS2: fsid=tweety:gfs2-00.0: Joined cluster. Now mounting FS...
GFS2: fsid=tweety:gfs2-00.0: jid=0, already locked for use
GFS2: fsid=tweety:gfs2-00.0: jid=0: Looking at journal...
GFS2: fsid=tweety:gfs2-00.0: fatal: filesystem consistency error
GFS2: fsid=tweety:gfs2-00.0:   inode = 4 25
GFS2: fsid=tweety:gfs2-00.0:   function = jhead_scan, file = fs/gfs2/recovery.c, line = 239
GFS2: fsid=tweety:gfs2-00.0: about to withdraw this file system
GFS2: fsid=tweety:gfs2-00.0: telling LM to withdraw
dlm: closing connection to node 2
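(If it helps with the diagnosis, I can post the cluster and lockspace state from both nodes; I would gather it with something like:

    cman_tool services    # cluster service and lockspace membership
    group_tool ls         # fence/dlm/gfs group state

from the cluster suite tools.)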
Trying to mount the fs again, I get the same error. Running gfs2_fsck -vy /dev/mapper/vg0-data0 gives:

Initializing fsck
Initializing lists...
Recovering journals (this may take a while)
jid=0: Looking at journal...
jid=0: Failed
jid=1: Looking at journal...
jid=1: Journal is clean.
jid=2: Looking at journal...
jid=2: Journal is clean.
jid=3: Looking at journal...
jid=3: Journal is clean.
jid=4: Looking at journal...
jid=4: Journal is clean.
jid=5: Looking at journal...
jid=5: Journal is clean.
jid=6: Looking at journal...
jid=6: Journal is clean.
jid=7: Looking at journal...
jid=7: Journal is clean.
jid=8: Looking at journal...
jid=8: Journal is clean.
jid=9: Looking at journal...
jid=9: Journal is clean.
Journal recovery complete.
Initializing special inodes...
Validating Resource Group index.
Level 1 RG check.
(level 1 passed)
1392 resource groups found.
Setting block ranges...
Starting pass1
Checking metadata in Resource Group #0
Checking metadata in Resource Group #1
Checking metadata in Resource Group #2
...................
Checking metadata in Resource Group #1391
Pass1 complete
Checking system inode 'master'
System inode for 'master' is located at block 23 (0x17)
Checking system inode 'root'
System inode for 'root' is located at block 22 (0x16)
Checking system inode 'inum'
System inode for 'inum' is located at block 330990 (0x50cee)
Checking system inode 'statfs'
System inode for 'statfs' is located at block 330991 (0x50cef)
Checking system inode 'jindex'
System inode for 'jindex' is located at block 24 (0x18)
Checking system inode 'rindex'
System inode for 'rindex' is located at block 330992 (0x50cf0)
Checking system inode 'quota'
System inode for 'quota' is located at block 331026 (0x50d12)
Checking system inode 'per_node'
System inode for 'per_node' is located at block 328392 (0x502c8)
Starting pass1b
Looking for duplicate blocks...
No duplicate blocks found
Pass1b complete
Starting pass1c
Looking for inodes containing ea blocks...
Pass1c complete
Starting pass2
Checking system directory inode 'jindex'
Checking system directory inode 'per_node'
Checking system directory inode 'master'
Checking system directory inode 'root'
Checking directory inodes.
Pass2 complete
Starting pass3
Marking root inode connected
Marking master directory inode connected
Checking directory linkage.
Pass3 complete
Starting pass4
Checking inode reference counts.
Pass4 complete
Starting pass5
Verifying Resource Group #0
Verifying Resource Group #1
Verifying Resource Group #2
Verifying Resource Group #3
Verifying Resource Group #4
..............
Verifying Resource Group #1388
Verifying Resource Group #1389
Verifying Resource Group #1390
Verifying Resource Group #1391
Pass5 complete
Writing changes to disk
Syncing the device.
Freeing buffers.
gfs2_fsck complete
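Before experimenting any further, I plan to back up the filesystem metadata first, along these lines (the output path is just an example):

    gfs2_edit savemeta /dev/mapper/vg0-data0 /root/gfs2-00.meta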
My questions are:

1. What does it really mean for GFS2 to have journal 0 locked?
2. How can I get out of this situation and make the fs mountable again?
3. Should I try to write some garbage into journal 0 with gfs2_edit, so as to force gfs2_fsck to recover it?

Thank you all for your time.

Theophanis Kontogiannis
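P.S. For question 3, what I had in mind was roughly the following (untested, and I would only print, not write, until someone confirms it is safe; the block number is a placeholder):

    gfs2_edit -p jindex /dev/mapper/vg0-data0    # locate the journal inodes
    gfs2_edit -p <block> /dev/mapper/vg0-data0   # then print journal 0's inode by block number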