One of my cluster nodes crashed the other day; when I brought it back up, I got this error:
GFS: Trying to join cluster "lock_dlm", "oss:mydisk"
GFS: fsid=oss:mydisk.0: Joined cluster. Now mounting FS...
GFS: fsid=oss:mydisk.0: jid=0: Trying to acquire journal lock...
GFS: fsid=oss:mydisk.0: jid=0: Looking at journal...
attempt to access beyond end of device
sdb: rw=0, want=19149432840, limit=858673152
GFS: fsid=oss:mydisk.0: fatal: I/O error

I tried to run gfs_fsck and got a segmentation fault. So I upgraded the cluster software (latest RHEL4 tag), compiled, and got:

# gfs_fsck -V
GFS fsck DEVEL.1211222576 (built May 19 2008 15:05:16)
Copyright (C) Red Hat, Inc.  2004-2005  All rights reserved.

[root@sproc cluster]# gfs_fsck -vv /dev/sdb
Initializing fsck
Initializing lists...
(bio.c:140) Writing to 65536 - 16 4096
Initializing special inodes...
(file.c:45) readi:  Offset (400) is >= the file size (400).
(super.c:226) 5 journals found.
Validating Resource Group index.
Level 1 check.
Segmentation fault

That gets a little further than last time (it never reached the Level 1 check before), but it still bails on me.
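For what it's worth, the want/limit numbers in that kernel message look like 512-byte sectors (that's an assumption on my part), which makes the mismatch easy to sanity-check from the shell:

# device size in 512-byte sectors; should match the kernel's limit=858673152
# (--getsize on older util-linux, --getsz on newer)
blockdev --getsize /dev/sdb
# limit: 858673152 sectors * 512 bytes = ~409 GiB, i.e. the real device
echo $((858673152 * 512 / 1024 / 1024 / 1024))
# want: 19149432840 sectors * 512 bytes = ~8.9 TiB, far past the end of the device
echo $((19149432840 * 512 / 1024 / 1024 / 1024))

So something in the on-disk metadata is pointing roughly 8.5 TiB past the end of a ~400 GiB device, which would line up with the resource group index being damaged.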
I'm not a GFS pro, but after a little gfs_tool list work, the volume seems to be there; it just feels like the server crash damaged some important bits along the way.
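If it would help, I can also dump the on-disk superblock for comparison; something like the below should read it straight off the unmounted device (the "all" keyword is from memory, so double-check the gfs_tool man page):

# print the fields of the GFS superblock from the raw device
gfs_tool sb /dev/sdb all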
The data on this drive isn't that critical; I'm just looking to see if I'm missing something dumb, or for confirmation that the partition is hosed (or just not worth trying to recover the 400 gigs of data at this point).
If this should go to the devel list, please let me know.

--
Wes Young
Network Security Analyst
CIT - University at Buffalo
-----------------------------------------------
| my OpenID: | http://tinyurl.com/2zu2d3 |
-----------------------------------------------