On Thu, 2006-11-16 at 14:56, Martijn van Oosterhout wrote:
> On Thu, Nov 16, 2006 at 12:40:41PM -0800, Glen Parker wrote:
> > But now, pull the drive from port 2 and boot the system. You will now
> > have SDA, SDB, SDC. The kernel will now fail BOTH of the last two
> > drives from the RAID array. The one that was SDC is gone, and
> > obviously fails. The one that was SDD is now SDC, so its ID doesn't
> > match what the kernel thought it should be, so it fails it too. If
> > you kill the FIRST drive in the array, I believe the entire array
> > becomes inoperable because of the resulting shift and ID mismatch.
>
> Is that really so? AIUI the position of the disk in the array is stored
> on the disk itself, so it should be able to handle disks moving around
> no problem. Have you tried it?

Just FYI, I've tried this before. Yes, Linux software RAID, knowing that
the Linux SCSI numbering system is non-deterministic, is designed to
handle this. In fact, you can build a RAID5 or RAID0 array of as many
disks as you like, shut down the machine, change every single drive ID,
and the machine will still find the RAID arrays.

Last I tested this was on something like RH 7.2, by the way. Times may
have changed, but I can't imagine someone being stupid enough to break
the RAID array handling that worked so well back then.

> > So the question is, is there some way to "pin" a drive to a device
> > mapping? In other words, is there a way to force the drive on port 0
> > to always be SDA, and the drive on port 2 to always be SDC, even if
> > the drive on port 1 fails or is pulled?
>
> I thought you could do this with options on the command-line, or using
> udev. But I don't think it's actually necessary.

You can, but it's generally not necessary.
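
For what it's worth, the reason it isn't necessary is that the md
superblock on each member disk records the array's UUID and that disk's
role in the array, and assembly matches on the UUID rather than on the
/dev/sdX name. A minimal sketch of what that looks like in
/etc/mdadm.conf (the UUID below is a made-up placeholder; use the one
reported by "mdadm --examine" on a member partition):

    # /etc/mdadm.conf -- sketch only, the UUID is a placeholder
    DEVICE partitions
    ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371

With "DEVICE partitions", mdadm scans everything listed in
/proc/partitions, reads the superblocks, and assembles whatever carries
that UUID, so the sdc/sdd shuffle doesn't matter to it.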
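
If you really do want to pin names anyway, udev is the usual route:
match on something stable such as the drive's serial number and create
a persistent symlink, then point whatever cares at the symlink instead
of /dev/sdX. A rough sketch (the rule file name and the serial string
are made up, and the exact match keys depend on your udev version):

    # /etc/udev/rules.d/60-local-raid.rules -- sketch only
    # Give the disk with this (made-up) serial a stable /dev/raid-port0 link
    KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="WDC_WD2500JS_WCANK1234567", SYMLINK+="raid-port0"

But as above, md itself doesn't need this; it only helps if something
else on the box wants stable device names.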