[Please respect the Reply-To header] Sean H. wrote:
My mdadm.conf is configured with UUIDs:
Ok.
DEVICE partitions
Ok.
ARRAY /dev/md0 level=raid5 num-devices=5 uuid=58dcdaf3:bdf3f176:f2dd1b6b:f095c127
Tried the following:

  mdadm --assemble /dev/md0 --uuid 58dcdaf3:bdf3f176:f2dd1b6b:f095c127

... and got this:

  mdadm: /dev/md0 assembled from 2 drives - not enough to start the array.

(Which is what I've been getting for a while now.)
Ok. So it's a different problem you have. What makes you think it's due to re-numbering/re-naming of the disks?

When you unplugged 3 of your disks, I suspect Linux noticed that fact and the md layer marked them as "failed" in the array, while the 2 that remained stayed active. Now that you have all 5 of them again, the 2 that were left in the system are "fresh", and the 3 that were removed are "old". So you really don't have enough fresh drives to start the array.

Now take a look at the verbose output of mdadm (see the -v option). If my guess is right, use the --force option. And take a look at the Fine Manual after all -- at the section describing assemble mode.
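Concretely, a minimal sketch of what I mean (the /dev/sd[a-e]1 member names are assumptions -- substitute your actual partitions). First compare the per-device event counters to confirm the fresh/old split:

  mdadm --examine /dev/sd[a-e]1 | grep -E '/dev/|Events'

Members whose Events count lags behind the others are the "old" ones. Then retry the assembly verbosely, and only if the picture matches my guess, force it:

  mdadm --assemble --verbose /dev/md0 --uuid 58dcdaf3:bdf3f176:f2dd1b6b:f095c127
  mdadm --assemble --force --verbose /dev/md0 --uuid 58dcdaf3:bdf3f176:f2dd1b6b:f095c127

(You may need "mdadm --stop /dev/md0" between attempts if a previous partial assembly is still holding the devices.) Note that --force marks slightly-stale members as up to date, which can silently lose whatever was written after they dropped out -- hence the manual reading first.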
It's possible to correct this issue by unplugging the three drives, plugging them back in, and rebooting, so the drives get their original /dev/sd* locations, is it not? (Even if it is possible, I'd rather learn how to fix problems like this at the software level than at the hardware level.)
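Note that device renaming by itself should not prevent assembly here. With "DEVICE partitions" in mdadm.conf, mdadm scans every partition it finds and matches array members by the UUID stored in each md superblock, not by the /dev/sd* name. As a quick check (the /dev/sda1 name is just an example), you can see which array a given disk thinks it belongs to:

  mdadm --examine /dev/sda1 | grep -i uuid

If that prints 58dcdaf3:bdf3f176:f2dd1b6b:f095c127, mdadm will pick the disk up for /dev/md0 regardless of where the kernel enumerated it.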
Please answer this question: why do you think the array does not start because of disk renumbering?

/mjt