Degraded Array


Hello everyone.

            I was just growing one of my RAID6 arrays from 13 to 14
members.  The growth had passed its critical section and the reshape had
been running for several minutes when the system came to a screeching
halt.  I hit the big red switch, and when the system rebooted, the array
assembled, but two members are missing: the new drive and the 13th drive
in the RAID set.  Of course, the array runs well enough with only 12
members, but it's definitely not an ideal situation, especially since the
reshape will take another day and a half.  Is it best to leave the array
in its current state until the reshape is done, or should I go ahead and
re-add the two failed drives now?
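For context, this is roughly what I would run to inspect the array and to
put the drives back, assuming the array is /dev/md0 and the dropped
members are /dev/sdm1 and /dev/sdn1 (device names here are examples, not
my actual layout):

```shell
# Check reshape progress and which members the kernel thinks are missing
# (/dev/md0, /dev/sdm1, /dev/sdn1 are example names only)
cat /proc/mdstat
mdadm --detail /dev/md0

# Try to re-add the dropped members; if the superblocks (or a write-intent
# bitmap) are still consistent, mdadm can slot them back in without a full
# rebuild
mdadm /dev/md0 --re-add /dev/sdm1 /dev/sdn1

# If --re-add is refused, a plain --add turns them into spares that get
# rebuilt, which on top of the running reshape means a lot more I/O
mdadm /dev/md0 --add /dev/sdm1 /dev/sdn1
```

These commands need root and a live md array, so treat them as a sketch
of the procedure rather than something to paste blindly.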

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

