Identifying failed disk(s) in an array

Hi,

I have just built a RAID 5 array using mdadm, and while it is running fine I have a question about identifying the order of disks in the array.

In the pre-SATA days you would connect your drives as follows:

Primary Master - HDA
Primary Slave - HDB
Secondary Master - HDC
Secondary Slave - HDD

So if disk HDC failed, I would know it was the master disk on the secondary controller and could replace that drive.

My current setup is as follows:

MB PATA Primary Master - Operating System

The array disks are attached to:

MB Sata port 1 
MB Sata port 2
PCI card Sata port 1

When I set up the array, the OS drive was SDA and the others were SDB, SDC and SDD.

Now the problem is that every time I reboot, the drives are sometimes detected in a different order. Because I mount root via the UUID of the OS disk, and the kernel identifies the RAID drives by their superblocks, everything comes up fine. But I'm worried that if I move the array to another machine and need to do an mdadm --assemble, I won't know the correct order of the disks. What is more worrying, if a disk fails, say SDC for example, I won't know which physical disk SDC is, as it could be any of the 5 disks in the PC. Is there any way to make it easier to identify which disk is which?
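One thing I did find is that the symlinks under /dev/disk/by-id encode each drive's model and serial number and point at whatever kernel name the drive got this boot, so I could in principle record which serial sits in which bay. A minimal sketch of that mapping (the symlink listing below is made-up sample data, not my real drives; on a real system it would come from listing /dev/disk/by-id and resolving each symlink):

```python
# Hypothetical sample of /dev/disk/by-id contents: each by-id name
# (which embeds model and serial) symlinks to a kernel device node.
# On a real system, build this dict with os.listdir("/dev/disk/by-id")
# and os.path.realpath() on each entry.
sample_links = {
    "ata-WDC_WD5000AAKS-00YGA0_WD-WCAS81234567": "/dev/sdb",
    "ata-WDC_WD5000AAKS-00YGA0_WD-WCAS87654321": "/dev/sdc",
    "ata-ST3500320AS_9QM1ABCD": "/dev/sdd",
}

def device_to_serial(links):
    """Invert the by-id mapping: kernel device node -> stable by-id name."""
    return {dev: name for name, dev in links.items()}

mapping = device_to_serial(sample_links)
# If sdc fails, this tells me which physical drive (by serial) to pull:
print(mapping["/dev/sdc"])
```

With the serial in hand, matching it to the label printed on the drive itself tells me which unit to pull, regardless of how the kernel ordered them on this boot.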

thanks

Mike



