Hi,

>> I have an old fedora server with a raid1 and raid5 array comprised of
>> four disks. One of the disks just died, and in the process of trying
>> to replace the disk, the server will for some reason no longer boot. I
>> think it was a problem with my initrd. I've since replaced the
>> defective disk (sdd) with a new one and created the fd partitions the
>> same size as they were originally.
>
> The usual way to do this is
>     sfdisk -d /dev/originaldevice | sfdisk /dev/newdevice
>
> But I usually do it as follows, to copy the rest of the boot sector and
> grub stuff:
>     dd if=/dev/originaldevice of=/dev/newdevice bs=512 count=63
>     blockdev --rereadpt /dev/newdevice
>
> (If the original partitions started at 1MB instead of the second
> cylinder, it would have been count=2048 above.)
>
> In both cases, originaldevice is a still-existing original RAID member
> disc.

Should I do this in lieu of a rebuild, or in addition to the rebuild
process you've described below?

> Assemble it without sdd2, which currently has no superblock, then add
> the new drive:
>
>     mdadm --stop /dev/md1
>     mdadm --assemble /dev/md1 --auto=yes /dev/sd[abc]2
>     mdadm --manage /dev/md1 --add /dev/sdd2

I've tried this, but it complains about the array not being shut down
cleanly. Should I just force it?

    % mdadm --assemble /dev/md1 --auto=yes /dev/sd[abc]2
    mdadm: /dev/md1 assembled from 3 drives - not enough to start the
    array while not clean - consider --force.

Thanks again,
Alex
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
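The 63-sector dd copy suggested above (MBR, partition table, and the GRUB stage-1.5 region before the first partition) can be rehearsed safely on scratch image files before touching real disks. This is only a sketch with made-up file names (`old.img`, `new.img`); on real hardware the source must be a surviving RAID member and the target the new, blank disk:

```shell
# Create a 1 MiB scratch "old disk" and plant a stand-in boot signature
dd if=/dev/zero of=old.img bs=512 count=2048 2>/dev/null
printf 'GRUB' | dd of=old.img bs=1 conv=notrunc 2>/dev/null

# Create a blank "new disk" of the same size
dd if=/dev/zero of=new.img bs=512 count=2048 2>/dev/null

# The copy step from the advice above: first 63 sectors carry the MBR,
# partition table, and boot code (count=2048 for 1MB-aligned layouts)
dd if=old.img of=new.img bs=512 count=63 2>/dev/null

# Verify the copied region is byte-identical
cmp -n $((63*512)) old.img new.img && echo "first 63 sectors match"
```

On a real device you would follow this with `blockdev --rereadpt` as shown, so the kernel picks up the copied partition table without a reboot.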
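Once the array is assembled (forced or not) and the new partition is added back with `mdadm --manage --add`, the resync progress appears in /proc/mdstat. The snippet below parses the recovery percentage from a sample of that format; the mdstat text here is an illustrative mock-up (device names and numbers invented), not output from the poster's machine:

```shell
# Hypothetical /proc/mdstat excerpt showing a raid5 rebuild in progress
cat > mdstat.sample <<'EOF'
md1 : active raid5 sdd2[4] sdc2[2] sdb2[1] sda2[0]
      1465151808 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
      [==>..................]  recovery = 12.5% (61072128/488383936) finish=93.4min speed=76218K/sec
EOF

# Pull out just the recovery percentage (on a live system, read /proc/mdstat)
awk -F'recovery = ' '/recovery/ {split($2, a, "%"); print a[1]}' mdstat.sample
```

Watching `cat /proc/mdstat` (or `watch cat /proc/mdstat`) until recovery completes is the usual way to confirm the --add actually rebuilt the array.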