How to replace faulty disk in RAID5 setup?

Hi,

This question came up in another thread, but it was buried at the end, so I thought it would be worth pulling it out and asking explicitly.

I have a 6-disk RAID5 array made up of 250GB Maxtor SATA drives (5 active + 1 hot spare).

Suppose one fails. What is the process I need to follow to replace the faulty disk?
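
(I assume the first step is to confirm which device has actually failed, e.g. something like:

  # cat /proc/mdstat
  # mdadm --detail /dev/md5

which should show which member is marked faulty and whether the hot spare has already kicked in. /dev/md5 here is just one of my arrays, as below.)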

Here's my best guess so far:

(assume /dev/sdc has failed).

Shut down the server.
Pull the dead drive.
Insert the new drive.
Boot up the server.
Create a partition table on the new drive (all my drives are partitioned identically):
  # sfdisk -d /dev/sda | sfdisk /dev/sdc
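
To sanity-check that, I guess I'd compare the partition tables afterwards (assuming the new drive also comes up as /dev/sdc):

  # sfdisk -l /dev/sda
  # sfdisk -l /dev/sdc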

(Is it necessary to explicitly "remove" the failed device from the arrays before shutting down, and to add it back in after replacing the disk?)

For example, would this work?:

# mdadm /dev/md5 -f /dev/sdc2 -r /dev/sdc2 -a /dev/sdc2

According to my understanding, this does the following:

1. Marks /dev/sdc2 as faulty in /dev/md5 (though if the drive has failed, shouldn't it already be marked faulty?)
2. Removes /dev/sdc2 from /dev/md5
3. Adds /dev/sdc2 back to /dev/md5
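
Spelled out as separate commands, I believe that one-liner is equivalent to:

  # mdadm /dev/md5 --fail /dev/sdc2
  # mdadm /dev/md5 --remove /dev/sdc2
  # mdadm /dev/md5 --add /dev/sdc2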

Am I missing any important steps?
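
(One step I suspect I should add at the end: keeping an eye on the rebuild onto the new partition, e.g.

  # cat /proc/mdstat

until the resync completes.)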

R.
-- 
http://robinbowes.com

