On Tue, 2003-11-25 at 15:23, list@xxxxxxxxxxxxxxx wrote:
> Forrest,
> Here is the output of fdisk -l
>
> Disk /dev/sda: 36.9 GB, 36969185280 bytes
> 255 heads, 63 sectors/track, 4494 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>    Device Boot    Start       End    Blocks   Id  System
> /dev/sda1   *         1      4047  32507496   fd  Linux raid autodetect
> /dev/sda2          4048      4366   2562367+  fd  Linux raid autodetect
> /dev/sda3          4367      4493   1020127+  82  Linux swap
>
> Disk /dev/sdb: 36.9 GB, 36969185280 bytes
> 255 heads, 63 sectors/track, 4494 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>    Device Boot    Start       End    Blocks   Id  System
> /dev/sdb1   *         1      4047  32507496   fd  Linux raid autodetect
> /dev/sdb2          4048      4366   2562367+  fd  Linux raid autodetect
> /dev/sdb3          4367      4493   1020127+  82  Linux swap
>
> I am assuming it is interesting that fdisk -l still picks up /dev/sda, yes?

Yes. Very interesting...

> The server is colocated, so it will be a little bit of a hassle getting to
> the machine and getting everything back online again. It is not the biggest
> deal, as there are offsite backups, and the machine is fairly new into
> service.
>
> Did you see anything in the RAID config that would indicate to you that
> swapping in a new drive would fix the problem?

No, not really.

> Basically, I will have a "limited" amount of time in the facility to perform
> the upgrade, so I would like it to go as seamlessly as possible. What are
> the steps in restoring the array?

If it were me, and I had good backups, I would just have someone at the
colocation facility reboot the machine and watch it boot again, making sure
that the BIOS saw two disks and that there were no errors when trying to
start the RAID. Of course, if you do that, sda may not come back up, and you
may have to do a recovery.

I am going to do a RAID 1 installation myself and see what happens when I
lose a disk.

Forrest

--
Shrike-list mailing list
Shrike-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/shrike-list
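For what it's worth, if sda does not come back up after the reboot, the recovery would typically look something like the sketch below. This assumes the mirror is /dev/md0 built from sda1/sdb1 (the thread never names the md device) and that mdadm is installed; on an older Red Hat box the raidtools equivalent of the add step is raidhotadd.

```shell
# Check the array state first: a failed member shows as (F),
# and a missing member shows as [_U] or [U_] in the status line
cat /proc/mdstat

# Mark the suspect partition failed and remove it from the mirror
# (device names here are an assumption; check mdadm --detail /dev/md0)
mdadm /dev/md0 --fail /dev/sda1
mdadm /dev/md0 --remove /dev/sda1

# After physically replacing the drive, copy the partition table
# over from the surviving disk so the layouts match
sfdisk -d /dev/sdb | sfdisk /dev/sda

# Add the new partition back; the kernel rebuilds in the background
mdadm /dev/md0 --add /dev/sda1

# Watch the resync progress
cat /proc/mdstat
```

The same sequence would be repeated for the second mirror (sda2/sdb2), and the swap partition just needs mkswap and a swapon since it is not part of the RAID.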