Re: raid 1 recovery steps

On 9/24/06, chapman <chris@xxxxxxxxxxxxx> wrote:
> Can I assume the disk is ok, just needs to be
> re-added to the array?

Not necessarily. You should look at the logs to find out _why_ it was
marked bad. If you don't know how long it has been broken, you need
some monitoring in place, such as mdadm's monitor mode or logcheck.
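
For example, mdadm's own monitor mode can run as a daemon and mail you
when a disk drops out (assuming local mail delivery works; adjust the
address to taste):

mdadm --monitor --scan --daemonise --mail root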

The most common cause is bad sectors. If you can re-add the disk and
the resync completes without complaining, you're probably OK.
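
Before re-adding, it's also worth checking the kernel log and the SMART
data for that disk (device name taken from your mail; smartctl needs
smartmontools installed):

dmesg | grep -i sda
smartctl -a /dev/sda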

> I'm assuming I need to first remove sda1 from the raid then re-add it,
> correct?  If so, what are the specific steps?

mdadm /dev/md0 -r /dev/sda1
mdadm /dev/md0 -a /dev/sda1
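
You can then follow the rebuild in /proc/mdstat, e.g.:

cat /proc/mdstat
watch -n 5 cat /proc/mdstat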

> Can this be done safely on a
> live server without pulling the system down?

Sure.
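
If the resync load hurts the live server, you can throttle it through
the md speed limits; the value below (in KB/s per device) is just an
example, pick whatever your box can tolerate:

echo 10000 > /proc/sys/dev/raid/speed_limit_max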

> How will this affect rebooting
> once completed - if at all?

It should not have any impact, at least as long as the boot setup is OK.
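
The one thing worth double-checking is that the boot loader is installed
on both disks, so the box still boots if sda is the one that dies next.
With GRUB that is roughly (assuming sdb is the other half of the mirror):

grub-install /dev/sda
grub-install /dev/sdb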

> Any gotchas I should look out for?

Things get tricky if the other disk has quietly gone bad as well.
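
So before kicking sda1 out, it doesn't hurt to give the remaining disk a
quick check (again assuming sdb is the other mirror half and that
smartmontools is installed):

smartctl -H /dev/sdb
dd if=/dev/sdb of=/dev/null bs=1M

The dd forces a full read of the disk; any read errors will show up in
the kernel log.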

- tuomas