Hi all,

I had a RAID-1 array set up, and one disk went bad. I replaced it, but then I did something I probably shouldn't have: I removed the UUIDs of the root partitions (i.e. the partitions holding the / filesystem). I then booted from a Knoppix disc and ran the following command:

  mdadm --create /dev/md1 --raid-devices=2 --level=1 /dev/sda2 /dev/sdb2

That worked, and the partitions synced. However, when booting into my main system (Debian Sarge, kernel 2.6.8.1), it said "/dev/sd[ab]2 has wrong uuid". At first I didn't understand this, since I have no mdadm.conf file, so I didn't see where UUID information could come from other than the partitions themselves. That led me to suspect it was stored inside the initrd, and indeed, running mkinitrd made that error message go away.

The remaining problem is that /dev/md1 is never set up correctly during boot:

- If I have root=/dev/sda2 in GRUB, it starts /dev/md1 with only /dev/sdb2, claiming that /dev/sda2 can't be added because the device is busy.
- If I have root=/dev/sdb2 in GRUB, I get the same problem, except that this time it's /dev/sdb2 that can't be added.
- If I have root=/dev/md1 in GRUB, it doesn't seem to find an /sbin/init executable, so the boot process never completes.

I'm guessing the "busy" partition is busy because it's the one specified as the root partition in GRUB. But it would seem logical for /dev/md1 to be assembled *before* the root partition becomes active, no?

Anyway, is this a problem that can be solved?

Thanks in advance,
Hans
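P.S. In case it helps with diagnosing: to check that both partitions carry the same UUID in their md superblocks (as they should after the --create above), I used something like the following:

  mdadm --examine /dev/sda2 | grep -i uuid
  mdadm --examine /dev/sdb2 | grep -i uuid

And the mkinitrd invocation that made the "wrong uuid" message go away was, as far as I remember, roughly this (the kernel version string is from memory, so treat it as approximate):

  mkinitrd -o /boot/initrd.img-2.6.8.1 2.6.8.1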
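P.P.S. For completeness, the menu.lst stanza for the root=/dev/md1 case looks roughly like this; note that the (hd0,1) root line is from memory and assumes /boot lives on the root partition:

  title  Debian GNU/Linux, kernel 2.6.8.1 (RAID)
  root   (hd0,1)
  kernel /boot/vmlinuz-2.6.8.1 root=/dev/md1 ro
  initrd /boot/initrd.img-2.6.8.1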