http://www.gagme.com/greg/linux/raid-lvm.php

You can try this with the spare drives you have. Basically, what you
have to do is check whether the drive now being linked to another
device name is the cause of this problem. Once it shows as unplugged
or failed, you can use your new replacement drive and reboot. Kindly
read the comments on this article, which are very useful.

On 8/27/08, Sujit Karataparambil <sjt.kar@xxxxxxxxx> wrote:
>
> > Thanks much for the reply. For the purposes of this discussion you can
> > assume that I've already re-established confidence in the drive, the
> > cable, and the controller, and that the data on the drives is worthless
> > and I just want to get maximum uptime without causing a RAID assembly
> > problem on the next reboot.
>
> Good.
>
> > Any idea on my original question? If I re-add the drive using the
> > /dev/sdc name, will I have problems on the next boot when the drive is
> > named /dev/sda?
>
> Since this seems to be a block device, it really does not matter.
>
> > Based on my experience with Linux and other software RAID
> > implementations, I'm strongly inclined to think that the device naming
> > doesn't matter - the system will scan the drives at boot looking for
>
> Kindly read some decent kernel documentation before you jump up and
> say this. Kindly surf the net and read some decent articles before you
> attempt any upgrades for now.
>
> Sujit
>
> --
> --linux(2.4/2.6),bsd(4.5.x+),solaris(2.5+)

--
--linux(2.4/2.6),bsd(4.5.x+),solaris(2.5+)
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
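For what it's worth, md identifies array members by the superblock
UUID on each disk, not by the /dev/sdX name, which is why the name can
change between boots without breaking assembly. A rough sketch of the
check-and-re-add sequence (assuming an array at /dev/md0 and the
replacement disk currently appearing as /dev/sdc1; substitute your own
device names):

```shell
# Confirm which member md considers failed; members are tracked by
# superblock UUID, so the current /dev/sdX name is incidental:
cat /proc/mdstat
mdadm --detail /dev/md0

# Mark the dead member failed, remove it, then add the replacement
# under whatever name the kernel gave it this boot:
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
mdadm --manage /dev/md0 --add /dev/sdc1

# In /etc/mdadm.conf, identify the array by UUID (taken from
# "mdadm --detail /dev/md0") so boot-time assembly does not depend
# on device names:
#   ARRAY /dev/md0 UUID=<uuid from mdadm --detail>
```

These commands need root and a real md array, so treat this as an
outline to adapt rather than something to paste verbatim.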