Starting a raid5 when devices have changed and having replaced a failed disk

This may repeat a similar post from a few days ago: I have a raid5 on /dev/hdb1, /dev/hdc1, /dev/hdd1, /dev/hde1, /dev/hdf1 that had been running in degraded mode since hdb went bad.  I have now replaced the failed disk, but all of my device names have changed because I moved to one disk per IDE bus.  I have been trying to use mdadm.

Should I be trying to build a new array?  As in, mdadm --build /dev/md0 --raid-devices=5 /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1 /dev/hdm1?  /dev/hde1 is the disk I replaced.  Thanks.
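For reference, my reading of the mdadm man page (and I may well be wrong, which is why I'm asking) is that --build is only for legacy arrays with no superblocks, whereas --assemble identifies members by their superblocks, so the renamed devices shouldn't matter.  This is the rough sequence I was considering instead; the device names are my new ones, and I haven't run this yet:

```shell
# Inspect a surviving member; --examine prints the array UUID and the
# disk's role from the superblock, regardless of the current device name.
mdadm --examine /dev/hdg1

# Assemble the degraded array from the four old members under their new
# names (the superblocks identify them, so listing order doesn't matter).
mdadm --assemble /dev/md0 /dev/hdg1 /dev/hdi1 /dev/hdk1 /dev/hdm1

# Then add the replacement disk so the array rebuilds onto it.
mdadm /dev/md0 --add /dev/hde1
```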


-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html