Hi Bob,

Thank you very much for your interest. No problem at all sending you the requested information. Anything to help the open source community... (and, as a side effect, maybe get my data back :) ). I am sending you an off-list e-mail with download details.

Thank you,
Theophanis Kontogiannis

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Bob Peterson
Sent: Thursday, July 24, 2008 4:51 PM
To: linux clustering
Subject: Re: Journal 0 locked on GFS2? gfs2_fsck gives no results!

On Thu, 2008-07-24 at 15:44 +0300, Theophanis Kontogiannis wrote:
> GFS2: fsid=tweety:gfs2-00.0: fatal: filesystem consistency error
> GFS2: fsid=tweety:gfs2-00.0: inode = 4 25
> GFS2: fsid=tweety:gfs2-00.0: function = jhead_scan, file = fs/gfs2/recovery.c, line = 239

Hi Theophanis,

I haven't seen this error before. It indicates a bad entry in the first journal. The gfs2_fsck program rejected it for the same reason that the GFS2 file system rejected it.

I've been doing a lot of work on gfs2_fsck this week, so it would be interesting for me to get a copy of your file system metadata (not any of the data) and run it through my latest fsck on one of my test systems. I'd also like to examine the journal to see what's wrong with it and possibly give gfs2_fsck the ability to repair the damage. I can't make any promises, though.

If you're interested in doing this, run these commands:

    gfs2_edit savemeta /dev/vg0/data0 /tmp/theophanis.metadata
    bzip2 /tmp/theophanis.metadata

Then put the resulting .bz2 file on a server where I can get it.

You can try this with the pre-existing gfs2_edit program, but it might not save all of the metadata I need; I don't know how up to date CentOS is with regard to gfs2_edit. You can also download the latest cluster source code from the git tree, compile it, and run the latest version to make sure I get everything.

If you're not willing to send me your metadata, you could run this command and e-mail the output:

    gfs2_edit -p journal0 /dev/vg0/data0 > /tmp/journal0.txt

Then I could at least try to determine what's wrong with the bad journal.

Regards,

Bob Peterson
Red Hat Clustering & GFS

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
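
For reference, the metadata-capture steps Bob describes above can be run back to back as a short shell session. This is only a sketch of his instructions: the device path /dev/vg0/data0 and the /tmp output names are taken from his example and would need to be adjusted to the actual logical volume and a directory with enough free space.

    # Save only the GFS2 metadata (no file data) with gfs2_edit's savemeta
    # command, then compress the result for transfer as a .bz2 file.
    gfs2_edit savemeta /dev/vg0/data0 /tmp/theophanis.metadata
    bzip2 /tmp/theophanis.metadata

    # Alternative if sending the metadata is not an option: dump journal0 in
    # printable form and e-mail the resulting text file instead.
    gfs2_edit -p journal0 /dev/vg0/data0 > /tmp/journal0.txt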