Recovering RAID5 array

Hi all

I'm having a RAID week. It looks like one disk out of a
3-disk RAID5 array has failed. The array consists of
/dev/hda3, /dev/hdb3 and /dev/hdc3 (all 40GB).
I'm not sure which one is physically faulty. In an attempt
to find out, I did:
  mdadm --manage --set-faulty /dev/md0 /dev/hda3
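
In hindsight I probably should have inspected the superblocks and
the kernel's view of the array first, instead of guessing. Something
along these lines, assuming the old superblocks are still readable:

  cat /proc/mdstat            # which members the kernel still considers active
  mdadm --examine /dev/hda3   # per-device superblock: event count and state
  mdadm --examine /dev/hdb3
  mdadm --examine /dev/hdc3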

The consequence was that two disks were marked faulty, leaving me
no way to bring the array up again so that I could use raidhotadd
to put the device back.
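
I gather that 'mdadm --assemble --force' can sometimes bring an array
back up when a member only has a stale faulty marking, and that
'mdadm --add' is the mdadm equivalent of raidhotadd. So the recovery
attempt I have in mind looks roughly like this (untested, and assuming
the array is currently stopped):

  mdadm --stop /dev/md0
  mdadm --assemble --force /dev/md0 /dev/hda3 /dev/hdb3 /dev/hdc3
  # if it comes up degraded, re-add the kicked member:
  mdadm /dev/md0 --add /dev/hda3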

I'm scared of recreating superblocks and losing all my data.
So now I'm dd'ing all three RAID partitions (along the lines of
'dd if=/dev/hdb3 of=/dev/hdc2') so that I can work on a *copy*
of the data.
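
Concretely, the copying looks something like this (the target
partitions below are only placeholders for the spare space I've set
aside on another drive; conv=noerror keeps dd going past read errors):

  dd if=/dev/hda3 of=/dev/hdX1 bs=64k conv=noerror,sync
  dd if=/dev/hdb3 of=/dev/hdX2 bs=64k conv=noerror,sync
  dd if=/dev/hdc3 of=/dev/hdX3 bs=64k conv=noerror,sync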

Then I aim to run:
mdadm --create /dev/md0 --raid-devices=3 --level=5 \
  --spare-devices=1 --chunk=64 --size=37111 \
  /dev/hda1 /dev/hda2 missing /dev/hdb1 /dev/hdb2

hda2 is a copy of the partition from the drive I currently
suspect has failed; hdb2 is a blank partition.
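
Before trusting the recreated array I'd want to look at it read-only
first; on the copies I think something like this should be safe (the
mount point is just an example):

  mdadm --detail /dev/md0
  mount -t reiserfs -o ro /dev/md0 /mnt/recovery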

I've been running Seagate's drive diagnostic software
overnight, and the old disks check out clean. This makes me
afraid that what I'm seeing is reiserfs corruption, not a
failed RAID disk :/
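
If it does turn out to be filesystem corruption rather than a dead
disk, I suppose the next step (on the copy only) would be something
like:

  reiserfsck --check /dev/md0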

Does anyone here have any comments on what I've done so far,
or suggestions for anything better to do next?

--
Jean Jordaan
http://www.upfrontsystems.co.za
