e2fsck: Bad magic number in super-block

I posted this to the fedora-list, but thought I might get some additional information here as well.

I have a hard drive that refuses to mount, failing with 'bad magic number in super-block'. I'm running Fedora Core 6 x86_64.

[root@moe ~]# fdisk -l /dev/hdc

Disk /dev/hdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1   *           1          13      104391   83  Linux
/dev/hdc2              14        9729    78043770   8e  Linux LVM


[root@moe ~]# mount -t ext3 /dev/hdc2 /Big-Drive/
mount: wrong fs type, bad option, bad superblock on /dev/hdc2,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
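For completeness: the fdisk output above shows /dev/hdc2 as partition type 8e (Linux LVM), so the ext3 filesystem may live inside a logical volume rather than directly on /dev/hdc2. In that case I believe e2fsck should be pointed at the logical volume, not the raw partition. Something along these lines (the volume group and LV path at the end are just placeholders, not my actual names):

```shell
# Check whether /dev/hdc2 is an LVM physical volume; if so, the ext3
# filesystem lives on a logical volume inside it, not on hdc2 itself.
pvdisplay /dev/hdc2     # shows PV details if hdc2 holds LVM metadata
vgscan                  # rescan for volume groups
vgchange -ay            # activate any volume groups found
lvscan                  # list logical volumes and their device paths
# Then fsck the logical volume (this path is a placeholder):
e2fsck /dev/VolGroup00/LogVol00
```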

[root@moe ~]# e2fsck -b 11239425 /dev/hdc2
e2fsck 1.39 (29-May-2006)
e2fsck: Invalid argument while trying to open /dev/hdc2

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>

[root@moe ~]# mke2fs -n /dev/hdc2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
9764864 inodes, 19510942 blocks
975547 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
596 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424


I've tried 'e2fsck -b <superblock> /dev/hdc2' with every backup superblock listed above, to no avail.
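For what it's worth, the backup list printed by mke2fs -n matches the standard sparse_super layout: with 32768 blocks per group, backups sit at the start of group 1 and of every group whose number is a power of 3, 5, or 7. A small sketch that reproduces the list above (numbers assume the geometry shown by mke2fs -n):

```shell
# Reproduce the backup-superblock block numbers for this filesystem:
# 32768 blocks per group; sparse_super places backups in group 1 and in
# groups whose number is a power of 3, 5, or 7.
bpg=32768
for group in 1 3 9 27 81 243 5 25 125 7 49 343; do
    echo $(( group * bpg ))
done | sort -n
```

One thing I'm not sure I've accounted for: since this filesystem uses 4096-byte blocks, I understand e2fsck may also need the block size spelled out when given an alternate superblock, e.g. 'e2fsck -b 32768 -B 4096 /dev/hdc2'.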

I've read that 'mke2fs -S' is a possible solution; however, since it is recommended only as a last resort, I have held off on trying it.

I'm afraid I'm toast, but I'm still hopeful that I might recover some (or all) of my data.

Have I overlooked something?

Thanks,
Mike

--

 IBM: Insanely Better Marketing
  18:20:01 up 1 day,  4:08,  0 users,  load average: 0.12, 0.27, 0.25

 Linux Registered User #241685  http://counter.li.org

_______________________________________________
Ext3-users mailing list
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users
