On Wed, Dec 04, 2002 at 10:00:52AM -0500, Theodore Ts'o wrote:
> > Even clearing the has_journal and needs_recovery flags produced the same
> > output using fsck as above.
>
> The exact same messages?  Including an error about reading the journal
> superblock?  Are you sure about this?  That doesn't make any sense at
> all....

See:

debugfs:  open -f -w /dev/hdb2
debugfs:  features -has_journal -needs_recovery
Filesystem features: filetype sparse_super
debugfs:  quit

[/home/lynx] root@wuehlkiste:# fsck -f /dev/hdb2
fsck 1.27 (8-Mar-2002)
e2fsck 1.27 (8-Mar-2002)
/dev/hdb2: Invalid argument while reading block 16712447
/dev/hdb2: Invalid argument reading journal superblock

fsck.ext2: Invalid argument while checking ext3 journal for /dev/hdb2

I was confused as well, since I thought this would bring me back to ext2
in some way.

> 1) Run "debugfs /dev/hdb2", and then type command "stat <8>", and send
> me the output.  That would be useful to see what's going on with the
> journal inode.

Here it is:

root@wuehlkiste:# debugfs /dev/hdb2
debugfs 1.27 (8-Mar-2002)
debugfs:  stat <8>
Inode: 8   Type: regular   Mode: 0777   Flags: 0xff00ff   Generation: 16711935
User:   255   Group:   255   Size: 71777214328144127
File ACL: 16711935    Directory ACL: 0
Links: 255   Blockcount: 16711935
Fragment:  Address: 16711935    Number: -1    Size: 0
ctime: 0x3dffe6ff -- Wed Dec 18 04:09:51 2002
atime: 0x00ff00ff -- Mon Jul 13 11:12:15 1970
mtime: 0x3dffe6ff -- Wed Dec 18 04:09:51 2002
dtime: 0x00ff00ff -- Mon Jul 13 11:12:15 1970
BLOCKS:
(0):16712447, (1):16712447, (2):16712447, (3):16712447, (4):16712447,
(5):16712447, (6):16712447, (7):16712447, (8):16712447, (9):16712447,
(10):16712447, (11):16712447, (IND):16712447, (DIND):16713471,
(TIND):16711935
TOTAL: 15

> 2) If the journal inode is completely trashed, you can try running
> "debugfs -w /dev/hdb2", and then use the command "clri <8>".  That
> will completely blow away the journal inode.
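A side note on the stat <8> output above, before I get to your other
points: the values that repeat in almost every field look like a plain
byte pattern when printed in hex, which to me suggests the inode was
overwritten with pattern data rather than damaged selectively. I
double-checked with a quick shell one-liner (pure arithmetic, nothing
here touches the disk):

```shell
# Convert the values that repeat all over inode 8 to hex.
# They all come out as 00ff-style byte patterns.
printf '%x\n' 16711935   # Generation / Blockcount / TIND -> ff00ff
printf '%x\n' 16712447   # the block number fsck choked on -> ff02ff
printf '%x\n' 16713471   # the DIND block                  -> ff06ff
```

The Flags value 0xff00ff and the atime/dtime of 0x00ff00ff fit the same
pattern.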
> It shouldn't be necessary if you've cleared the has_journal and
> needs_recovery journal, however.
>
> 3) Before you do any of this, if you have the disk space, it would be
> useful if could somehow see the output of "e2image -r /dev/hda2 - |
> bzip2 > hda2.img.bz2", for forensic purposes.  It will produce a
> somewhat largish file, and getting that uploaded might be a problem,
> but it would be useful to see exactly what's going on.

Before I do anything like writing to the fs, I'd just like to check that
I'm doing things right, so here is what I have done so far:

The partition that REALLY crashed is /dev/hdb1, which is 2 GB. Moving
some data freed /dev/hdb2 (2.5 GB) for 'backup', so I did a
'dd if=/dev/hdb1 of=/dev/hdb2 bs=1024 conv=sync'.
(BTW: does the bs= of dd have anything to do with the block size of the
fs, which is 4096? I don't know about this.)

So /dev/hdb1 is still 'virgin' concerning the error state (I hope!), and
all the experimental stuff I did (like e2salvage or trying to mount it
as ext2) was done on /dev/hdb2.

Since I still have the originally crashed partition: do I need the image
file from e2image, or could I skip this step? Disk space has become
scarce on that machine.

> Finally, upgrading to a newer version of e2fsprogs might help,
> although in this particular case, I don't think it will; the journal
> support code hasn't changed much in recent releases.

I already compiled 1.32, and at least the static fsck behaved as above.

Regards

--
Stephan Wiehr

http://www.asta.uni-sb.de/~lynx/

"Always remember: You're unique, just like everyone else."

_______________________________________________
Ext3-users@redhat.com
https://listman.redhat.com/mailman/listinfo/ext3-users
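P.S. Partly answering my own dd question above: as far as I can tell,
bs= is only the size of the chunks dd reads and writes per syscall, not
anything filesystem-related, so different bs values should produce
bit-identical copies. A quick sketch using throwaway temp files (the
files here are made up for the experiment; nothing touches the real
disks):

```shell
# Copy the same source with two different bs= values and compare the
# results; bs= only affects the I/O chunk size, not the data copied.
src=$(mktemp); a=$(mktemp); b=$(mktemp)
dd if=/dev/urandom of="$src" bs=1024 count=8 2>/dev/null
dd if="$src" of="$a" bs=1024 2>/dev/null
dd if="$src" of="$b" bs=4096 2>/dev/null
cmp -s "$a" "$b" && echo "identical"
rm -f "$src" "$a" "$b"
```

The one caveat I'm aware of: with conv=sync (as in my command above), a
short final read would be padded out to bs with zeros, so a larger bs
could append more trailing zeros past the end of the partition data.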