Attempting to recover from a downed RAID5 array:

# mdadm -S /dev/md0
# mdadm -Af /dev/md0 /dev/hdm4 /dev/hde2 /dev/hdo2 /dev/hdh2 /dev/hdf2 /dev/hdg2
Segmentation fault
#

Could it be that my mdadm is making assumptions about my system that it shouldn't? This mdadm was compiled on a different 2.6.8 system, since my RAID5 box can't access its compiler, which is on the RAID5 partition. Does anyone out there have an mdadm static binary compiled for kernel 2.6.8-gentoo-r3?

Here are more details. I have a Gentoo system (kernel 2.6.8-gentoo-r3) with a 7-drive RAID5 array. Recently that array went down, and I was advised by the list to try mdadm. When I boot, I see

  Starting up RAID devices: ...
   * Trying md0...                                    [ !!FAILED ]

and the system drops me to a shell. I type:

# cat /proc/mdstat
Personalities : [raid1] [raid5]
md0 : inactive hdm4[0] hde2[6] hdo2[5] hdh2[4] hdf2[3] hdg2[2]
      1464789888 blocks

unused devices: <none>

OK, the array is missing partition hdp2. dmesg says it has an invalid superblock:

# dmesg | grep hdp
    ide7: BM-DMA at 0xd808-0xd80f, BIOS settings: hdo:DMA, hdp:DMA
hdp: WDC WD2500JB-00GVA0, ATA DISK drive
hdp: max request size: 1024KiB
hdp: 488397168 sectors (250059 MB) w/8192KiB Cache, CHS=30401/255/63, UDMA(100)
md: invalid raid superblock magic on hdp2
md: hdp2 has invalid sb, not importing!
Adding 64220k swap on /dev/hdp1.  Priority:-2 extents:1

Other than that, I didn't see any indication that there is anything wrong with the hdp drive.
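One way to see what md actually finds on the rejected member is to dump its on-disk superblock with mdadm's examine mode and compare it against a known-good member. A minimal sketch, using the device names above (output fields such as the magic, UUID, and event count should match across healthy members):

```shell
# Dump the persistent RAID superblock of the rejected partition...
mdadm --examine /dev/hdp2
# ...and of a member that assembled cleanly, for comparison.
mdadm --examine /dev/hdm4
```

If hdp2's superblock really is gone while the partition itself reads fine, that would be consistent with the "invalid raid superblock magic" message rather than a failing drive.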
Here is my /etc/mdadm.conf file:

# cat /etc/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=7 UUID=d312c423:e2eeeff5:3401806f:ab10e3c
   devices=/dev/ide/host2/bus0/target0/lun0/part2,/dev/ide/host2/bus0/target1/lun0/part2,/dev/ide/host2/bus1/target0/lun0/part2,/dev/ide/host2/bus1/target1/lun0/part2,/dev/ide/host6/bus0/target0/lun0/part4,/dev/ide/host6/bus1/target0/lun0/part2

Since /proc/mdstat reports that six of the seven drives are already assembled, I tried running the array as-is:

# mdadm --run /dev/md0
mdadm: failed to run array /dev/md0: Invalid argument
# mdadm -v --run --force /dev/md0
mdadm: failed to run array /dev/md0: Invalid argument

Hmm... not very descriptive. I looked at the end of dmesg again for more hints:

# dmesg | tail -18
md: pers->run() failed ...
raid5: device hdm4 operational as raid disk 0
raid5: device hde2 operational as raid disk 6
raid5: device hdo2 operational as raid disk 5
raid5: device hdh2 operational as raid disk 4
raid5: device hdf2 operational as raid disk 3
raid5: device hdg2 operational as raid disk 2
raid5: cannot start dirty degraded array for md0
RAID5 conf printout:
 --- rd:7 wd:6 fd:1
 disk 0, o:1, dev:hdm4
 disk 2, o:1, dev:hdg2
 disk 3, o:1, dev:hdf2
 disk 4, o:1, dev:hdh2
 disk 5, o:1, dev:hdo2
 disk 6, o:1, dev:hde2
raid5: failed to run raid set md0
md: pers->run() failed ...

It was at this point that I tried stopping and re-assembling the array as described on the mailing list:

# mdadm -S /dev/md0
# mdadm -Af /dev/md0 /dev/hdm4 /dev/hde2 /dev/hdo2 /dev/hdh2 /dev/hdf2 /dev/hdg2
Segmentation fault
#

Any suggestions?

Thanks,
Dave
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
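On the static-binary question raised above: mdadm can be built statically from the upstream source tarball on any working Linux box and the resulting binary copied over, since it then needs no shared libraries at run time. A sketch, assuming the tarball layout and the mdadm.static make target found in the mdadm Makefiles I have seen ("1.x.y" is a placeholder for whatever release matches, not a specific version):

```shell
# Build a statically linked mdadm on another machine, then copy the
# resulting binary to the RAID5 box (e.g. via a rescue floppy or USB).
tar xzf mdadm-1.x.y.tgz
cd mdadm-1.x.y
make mdadm.static
```

A statically linked binary sidesteps glibc-version mismatches between the build host and the degraded system, though it cannot by itself rule out the segfault being a genuine mdadm bug.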