My RAID 5 array is down and I'm trying to figure out why. Here's what I see:

# mdadm -S /dev/md0
# mdadm -A /dev/md0 /dev/hdm4 /dev/hdg2 /dev/hdf2 /dev/hdh2 /dev/hdo2 /dev/hde2 /dev/hdp2
mdadm: /dev/md0 has been started with 6 drives (out of 7) and 1 spare
#

What does "6 drives (out of 7) and 1 spare" mean? Is that what I should expect from a healthy array? The reiserfs superblock is gone, so I suspect this mdadm message is trying to tell me something, but I can't figure out what.

Can I go ahead and tell fsck.reiserfs to re-create a superblock in this state? Or should I do something else to the array first to get it running healthy?

Thanks,
Dave
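
P.S. In case more detail would help, here is roughly what I can run to dump the array's state. This is just a sketch using my own device names from above (/dev/md0 and members like /dev/hde2), and I'm not sure which fields matter here:

# cat /proc/mdstat                (kernel's view: should show [7/6] and any spare marked (S))
# mdadm --detail /dev/md0         (array state, failed/spare counts, each member's role)
# mdadm --examine /dev/hde2       (the md superblock recorded on one member; repeat per disk)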