6 drives (out of 7) and 1 spare

My raid 5 array is down and I'm trying to figure out why.  Here's what
I see:

  # mdadm -S /dev/md0
  # mdadm -A /dev/md0 /dev/hdm4 /dev/hdg2 /dev/hdf2 /dev/hdh2 /dev/hdo2 /dev/hde2 /dev/hdp2
  mdadm: /dev/md0 has been started with 6 drives (out of 7) and 1 spare
  # 

What does "6 drives (out of 7) and 1 spare" mean?  Is that what I
should expect from a healthy array?  The reiserfs superblock is gone,
so I suspect that this message from mdadm should be telling me
something, but I can't figure out what.
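If it helps, my understanding is that the counts mdadm prints mirror the [raid-disks/active-disks] counters in /proc/mdstat, with "(S)" marking a spare and "_" marking a missing slot. Here's a rough sketch of how I'm reading them (the mdstat line below is invented, modeled on my assemble command above, not real output from my array):

```shell
# Hypothetical /proc/mdstat entry (NOT real output) modeled on the array above:
mdstat='md0 : active raid5 hdp2[7](S) hde2[0] hdo2[5] hdh2[3] hdf2[2] hdg2[1] hdm4[4]
      1234567 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]'

# "[7/6]" means 7 device slots configured, 6 currently active;
# the "_" in "[UUUUUU_]" is the slot with no working member.
counters=$(printf '%s\n' "$mdstat" | grep -o '\[[0-9]*/[0-9]*\]')
total=${counters#[};  total=${total%%/*}
active=${counters%]}; active=${active#*/}

# "(S)" tags a device the kernel is holding as a spare.
spares=$(printf '%s\n' "$mdstat" | grep -c '(S)')

echo "configured=$total active=$active spares=$spares"
if [ "$active" -lt "$total" ]; then
    echo "array is degraded (running on $active of $total devices)"
fi
```

So by that reading, "6 drives (out of 7) and 1 spare" would mean the array came up degraded, with one member demoted to spare instead of active — which is what I'm hoping someone can confirm.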

Can I go ahead and tell fsck.reiserfs to re-create a superblock in
this state?  Or should I do something else to my array to get it
running healthy?

Thanks,
Dave

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
