Hi all,

I've got a 6-disk RAID5 array that just failed because some little elf loosened the cable going to two of the disks in a SCSI enclosure, and both of them got kicked out of the array. Running mdadm -a /dev/md2 gives a segfault with both versions 1.6.0 and 1.7.0. I can query the array with mdadm just fine; the output is attached below. Anyone know what is going on? I can attach a strace if that helps (see below for roughly how I would capture it)...

Thanks,
Richard
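In case it is useful, this is roughly how I would capture that strace. The crashing command is the mdadm -a invocation above, and mdadm.strace is just an arbitrary output file name I picked:

    # follow any child processes and write the full trace to a file for attaching
    strace -f -o mdadm.strace mdadm -a /dev/md2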
root@aske:/tmp/mdadm-1.6.0# mdadm -Q --examine /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 92a6f7c4:4327b1f9:04f5c750:691edd28
  Creation Time : Fri Jun 25 22:06:08 2004
     Raid Level : raid5
    Device Size : 195358336 (186.31 GiB 200.05 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 2

    Update Time : Fri Aug 13 19:58:01 2004
          State : dirty
 Active Devices : 4
Working Devices : 4
 Failed Devices : 2
  Spare Devices : 0
       Checksum : 7bc34bee - correct
         Events : 0.43642

         Layout : left-symmetric
     Chunk Size : 128K

      Number   Major   Minor   RaidDevice State
this     0       8        1        0      active sync   /dev/sda1
   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       33        2      active sync   /dev/sdc1
   3     3       8       49        3      active sync   /dev/sdd1
   4     4       8       65        4      faulty   /dev/sde1
   5     5       8       81        5      faulty   /dev/sdf1