Hi,

I have an old Fedora server with a RAID1 array and a RAID5 array built from four disks. One of the disks died, and in the process of trying to replace it the server stopped booting; I suspect a problem with my initrd. I've since replaced the defective disk (sdd) with a new one and created the fd (Linux raid autodetect) partitions at the same sizes as the originals. Booting from a current rescue CD and trying to use mdadm to reassemble the RAID5 array, I'm having a problem:

% mdadm --assemble --auto=yes /dev/md1 /dev/sd[abcd]2
mdadm: no RAID superblock on /dev/sdd2
mdadm: /dev/sdd2 has no superblock - assembly aborted

% cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : inactive sda2[0](S) sdb2[2](S) sdc2[1](S)
      2928978624 blocks

md0 : active raid1 sda1[0] sdb1[2] sdc1[1]
      24000 blocks [3/3] [UUU]

It looks like the members of md1 are all marked as (S)pares, correct?

% cat /etc/mdadm.conf
DEVICE /dev/sdb2 /dev/sdd2 /dev/sdc2 /dev/sda2 /dev/sdb1 /dev/sdd1 /dev/sdc1 /dev/sda1
ARRAY /dev/md1 level=5 num-devices=4 devices=/dev/sdd2,/dev/sdc2,/dev/sdb2,/dev/sda2

I recreated mdadm.conf mostly from memory, but also with some help from mdadm itself:

% mdadm -Es
ARRAY /dev/md0 UUID=19fa0ce7:7733d970:be048336:6d8b5ba8
ARRAY /dev/md1 UUID=912aa422:617ee3db:df65aa69:42b7599e

Here is the superblock information from sda2, in the hope that it provides helpful details about the array:

% mdadm --examine /dev/sda2
/dev/sda2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 912aa422:617ee3db:df65aa69:42b7599e
  Creation Time : Sat Jun 26 16:19:21 2010
     Raid Level : raid5
  Used Dev Size : 976326208 (931.10 GiB 999.76 GB)
     Array Size : 2928978624 (2793.29 GiB 2999.27 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1

    Update Time : Sun Jul 31 23:24:25 2011
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 92905b9f - correct
         Events : 1041521

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2

   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2
   2     2       8       34        2      active sync   /dev/sdc2
   3     3       0        0        3      faulty removed

I'm really not sure what to do next, and obviously I want to do everything possible to save the array. How can I get mdadm to either rebuild the array using the new disk or start it in degraded mode so I can rescue the data? Is there another option I'm missing?

Thanks,
Alex
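
P.S. From reading the mdadm man page, my guess (untested, so please tell me if this is dangerous) is that I should first stop the inactive md1, then force-assemble it degraded from the three good members, and only add the new disk once it's running:

% mdadm --stop /dev/md1
% mdadm --assemble --force --run /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2
% mdadm --manage /dev/md1 --add /dev/sdd2

Is that the right sequence, or will --force make things worse here?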