Re: degraded raid array with bad blocks

On Thu, 16 Jul 2015 20:14:21 +0200
Fabian Fischer <raid@xxxxxxxxxxxxxxxxx> wrote:

> After booting, the removed disk wasn't re-added to the array (maybe
> because of a different event count). --re-add doesn't work.
> So I used --add.

As to why --re-add didn't work: I *just* had the same situation, and maybe you
needed to do 'mdadm --remove /dev/md127 faulty' first.
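
For illustration, assuming the array is /dev/md127 and the returning member is
/dev/sdc1 (both just placeholders -- adjust to your setup), the sequence would
look roughly like this:

    # drop the failed slot first, then try to re-add the old member
    mdadm --remove /dev/md127 faulty
    mdadm --re-add /dev/md127 /dev/sdc1
    # if --re-add is still refused (event count too far behind), fall back to --add
    mdadm --add /dev/md127 /dev/sdc1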

> Because of the bad blocks on one of the remaining disks, the rebuild
> stops when reaching the first bad block. The re-added disk is declared
> as spare, 2 disks active and the disk with bad blocks as faulty.

One course of action is to use dd_rescue to clone the disk with bad blocks onto
a new, clean disk (skipping the bad blocks as you go -- you will lose some
data), then assemble the array with the new disk in place of the one it was
cloned from and proceed with the rebuild. This time the source disk will have
no bad blocks, just zeroes at those locations, so the rebuild should complete
successfully. After the rebuild completes, fsck the filesystem and check file
checksums (if you saved them) to figure out where the damage actually landed,
and restore those files from backup.
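
A rough sketch of that procedure, here using GNU ddrescue (the same idea works
with dd_rescue); all device names, the array name /dev/md127 and the map file
path are made-up placeholders, assuming a 4-disk array whose members are whole
disks, /dev/sdb being the disk with bad blocks, /dev/sdd the fresh disk and
/dev/sde the disk that dropped out earlier:

    # copy everything readable from the failing disk onto the fresh one,
    # skipping unreadable sectors and logging progress to a map file
    ddrescue -f /dev/sdb /dev/sdd /root/sdb-rescue.map
    # stop the degraded array and reassemble it with the clone in place of the bad disk
    mdadm --stop /dev/md127
    mdadm --assemble --force /dev/md127 /dev/sda /dev/sdc /dev/sdd
    # add the disk that dropped out earlier and watch the rebuild
    mdadm --add /dev/md127 /dev/sde
    cat /proc/mdstat
    # once the rebuild finishes, check the filesystem (read-only first)
    fsck -n /dev/md127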

-- 
With respect,
Roman

Attachment: signature.asc
Description: PGP signature

